[Binary artifact: ustar tar archive of a Zuul CI output directory. Members: var/home/core/zuul-output/, var/home/core/zuul-output/logs/, var/home/core/zuul-output/logs/kubelet.log.gz (gzip-compressed kubelet log). The compressed payload is not recoverable as text.]
N\ߎ&i{͖vǮTJ?.iqu$ y%T &4]7& j0,/jӜ`7΀I.YW&^@a vWR^-:{QH!R=*R<&}"Q|;Iz$Qhw*#Xj.>00ixFDxH:bly͟4%/RNwI3PO?\T&5 |йjiD`hkiR2eOoFBrKṂSvprȏ)(s s1[=SLh>@嬽ddd{MrϥN}0H٣Άh{QVEyWdwO,"Ы/,E˨)hf5LfmYF hg2j@j,65j ̾K2W4\Zo(M+FsXE [-ZwpikAD+DilUsWxؽ\\!ڻ7Wrd;2W΂wb K\ .9Z-Ya)y`D%̕5W^*s}j9sh)B5W\uA)` I,5AH_7Γ!VV)EQgt꩖ ]2M.wTu D8ĵsQ~/cynzu3m/ %yN4]88 F-xZ\jLtnXqsT[zz\jǾe-v*c9ǾE\QtygF 0ꭹ%OTEIMQw d{:#)*0< E H%Ut*&X =eY]UebwO֬YW)ÿuC"9MbpT6%wPUb)}Z"){ܩ֑EؐP˲Ͷmw#rċ4#WPUH'ߑbL?nNV?!`n'vFڲ R~Bvc\d|Zv6,DFk֖#Z=ts(mskb$o 1dwĆcI4Dx-q()I,eYȌq5*q}˵J}1lŕtCOGi|4h0bR.;=PgG>|rV9fM5Nu}sʅ-A:L e&35lF\ Mql %O5[FTU Lxl@&uM*Y^ ??lpI7V\ 8*9#9Y`T.H/,YG0+txKWSr|̠J~f٪C4|0g I-ŕC٘hA8l˘ØL8%H=]ـ7LS I`،T@lz:4WcIw]M`^e^L6G^125!Za{03Њ,Q4y\CFBE)gQ,, b'EYrn 4fpqo>ğ,nÚŒ iAkvVHB_neYV(+D}irvgу3dj$Æ )W4,^^T(Z5b:X7-'` .WՁHN-3x8XmX8osAWtCtgncY!c(nw憻 PMMtAuG3f\suˢfSr3| t S]70_q0v͘XБmk\M׉oPG anaB7L |F|:7g8Sl#ԘdɵH;Ȧ(hd_23IZclvU]V09ԛ<^i"7-WZcpue7s@q J=i`ZtiI}B s\& M#V3"Ӹܢs@ YՋƸ~n*J[ GmFQmAfha244BpPaA. 7ܪܲ( DbY|XI=,e[YXɹ;aX뻮\d  pm؞ v]'e2ۊ?>غoQumi4o4ngl~~ %q(q.])8bEcpڱ$9E RDyST+2d#g}ɔ_KuerE+?q]́de]#AtOl4,C7:f9aWDL-UNA.Y.s[$\ze| Hnu\S 5XhCzŹ2ҐTV]&|85Tшm;&3{y/o]V_ZfZ=T| [cUj+WTV)૜{W\|ʂfp6  w.m#I*Bp.p sLy$y_)˔-%%,%ovUݟe1q^?[om#P] A^<.Ww?(ƶT$7tnWY‹bԴ^g#|y27*)KۥTr[*Vt5t*jF>'|4#w8Vo@ٸso?ğo_|׿Ë~}x_/? uO`Q/&% ݲru銺գ/xN//88:#-icCF͟~ux>JX<{ [ěC -~цnKUROw˕N7;!fl fMcTrt'[m^OhSL1OSCHc)>fT3Nk1x^"NcGGnHU>Dz 7 sJw}Z7fyo4/8 0^rاqfmjRK=?ICv(Vy"<7$sP))8 P狫=qm``}b-0v l#,rov6X-K0wR`>z(;̀oMSiL`tPkuF:Lz'0>q_aeG%J@"*rEgq9D&Nc(dGȎZ$)AZ3 Z`wH`V6νK3U{uZs)vP"1JTI $B#0Ne̤q Ѹ'7ٔkA#jvXܷ;D}"_{H E4EqHXJb:JrZ:Z ] CԄ 'q,gG3veiomwowZGZ}p bw7wV[^Z9q뇲rVQ;Pc]6 MJˣBktlJV#njT(" gr` ݼ{gJsP0iU4i),z! 
}f-c ,kgEx 5in:X; 7U^o5(#|3v'MdZ k߿ⱣVs>=<00d9X3w/Y>5/ΚNJ!k%Gvs$Z9 ZƩIjݱϧL2S"I63ɔ7ܰwkBiL`~:߶7L ›l"zs}2mţX<Ǜ{ȩ{<7L'tcOx:L(AqnikkcLbsLeyPfTpjmŠR/"VzBVz2BtP}Uzp5@\YmM-[: u @wQq7•3 IW WXAWVIw\Jm+oДRA0NW(Wz*BFW'+cflϖWrqZ·v*]BxC[.4#+<6BSB\ W*[MǺj*޲JX͌ p/Ti JZ'&++Pk;@v*C@u\^pƄ~V%zvreǸjVwv*{v'-p%I=!\`,\\ :w\JӳW>Į v @ P}T2jHK|Vʓ9A'y]1ožhV^ѦR9!GF9׃uCƣALպNjQ!z4ǝ-W X:IW W;FWJw\fFfppeƓ @9X UJp5@\Y}n!X0&$ܮk;3k\ʾm+p(rBpG) *KW=کt6j0R\92BP=Yd Uzpdp%wlzy$cuVl%WVjhV*M߬+W2Mϥ|6 P^x*³dq*e6z\ q @.z%I;]9Tꀫ! SփӧZ 束>NTpj;P7WĕRRiJ tA+TEq* xBK2Buje+T\qe  AǺBL *w\J-+gfQȀr- ?jxqV•RNW(*J!kI ?wu(\\EƺBWҲ'+cӫ= ,x+:کUloORWmznRPdpr+P;Pe#.j8F* wdh'W *BFWʀJ<% sMD|/\Z{+TLqeq.jǓzpSOZu^8 t j*DbϏsiղҩвsLxK?_#Zƚ_V]ۺoqGi|H1^2fbƱ5?\%묆>S @:etRU&_N7~]Ͼ,ˊU9"=)ݳ{rC1jUJ<3.Ry,D82 ˽.9|{I~9߫p YfUu7 ץ%0ٶKno^Bʮ&rW5X#6w3v oV#|i8](l|:8 J>{*6ѵ:a5~Xhsƭ*ILټ`0sU9ϬB[0:qnURq*Vļ~ -X!zYy 0W[ފj2؜|tz5*# 4h(go_oIbzl偿sz 2z])xRNFlt[x28ٍ}'+9޷g7S/S. [%Ox <"_Z-)\<"_%qo?ru<_s'Ky]Nz>*By9Sysc^$J0,UiL5؉Y.%<<LVʞ[^z; }̄.z>7;Nc]s9gchCm(9u%.+׷Hq,󘤜R W-)n@G%R)B)wq />crBEm%C#Qᐏ5!crFM5dk6n!7ŅMVƀ! 
D`|qYݶ8`݅4o޴6ӓxe̴\ 5Yjݴ< q8>E<<sXx"Ɨ ba'`#F9PIZ VtsЄ -*n0 ldWPn %.!.%a!(zܡGCP@Ǫ mW{:Y$b%X'!,ȖRP2ILƹ <[1}) ,Q_g-^M,$G^D{(P%/rXhGҵ/ ETQ:B2],.q82Wvm)3ȼYPZzxO4<n9cAWRp<*= >b2P/VSWYrW9iEahȘDvYA(S ôƍcWv`ACjy#l6TY`%9PVx>ʗ {ok&p";7<ii*R-sD簇ש UXs5!ҟ9jKHJ,fת_e{tK={OAUׇ5JªZtAVQQ_*ʘʧQ8: 4+sc9)/l *g#z4Y܊-pvq"[=&1SQidOSs\7tl7?AbهVjx`PcQK򴕜W}͸spJͽV븱2j>Jsè5;|^)́EHﯭAi9NJF]FwEcq}-w#BƟGA< tTǁ MĊ>/7NA-ch{ 7<Ї%7f[G #`/Z QX4]8GA8 jB3xάw$Ig91LA34,DQOCZ0byƸFzaMSVEcMVq 2v6u{q,,a=;B0;p=b1T#o6lP^HJq"ḞĀU_H 8!HD}"0;RqB,oߣ?$:INa6#: BzpO] lN6XPnbP,|<#ɕO=Cgԗbj6PtSXuWhcOћnk˅NI۫1Zw+ |`(ztyKÜ#L>6sh:s 4HåL[EGD::Ah/:Q;Q+|oȶ 2SVP2*Fn(0Z1o:WQ1is_I T%u=ۋ,qbqG0Kc>%8t~QȈmM$?#Ց[Mq֧O\6,dcLs75z35a@ zjj30,@908Qϙ "CPZ~E` "zp>KnS|Z-X~9]p"}:m@/gԣp+I"ORy"mnNm¶w{,ʩBLL^;*bDKqDہLLb'B{36Xf΄O(V -W,xxg;|!]&Ѝ;,E0(-+%8-C@*Ubd~W{ I0F{ Fh hz|so -|˶;H3\i+#˓BXchA97&]0}6F%H1iDSLC眽2>A#3s gQ  H,rwkzmY9 &ݪV,Ծjgq!,0cձB@s[1s@ #K8QP݅JvzoͿv8q-tŅ88ihKdsvP.ъ _q d:,h@kKC`aO-_ Ͻ47Na:_f#E6:[T!lqDv` ꦨݽс٧ϑF3s$￑.7c ~v[o_m<8Y j R{yv`뭵pvv4Kkϖѳ"ʐ%Q/dGKO/ool_>DOЎ_uTf2;/ٷѡ.i"$sYd~W+w.O(h#+Bo2ay\ҿ{uu.RI:Bԙ@[gfTߌ⇘'zF &u B:OiE Nx]xڊ?POn=4T&ڠhm(KL@i3s4s3b)LQUb/뿎ˑN6vMp}wa@}AI U='j'鹎(w\3hZ/_-9.a/)emny Lm$/+f:}Qg+8Fo,D6hga|ZGbz_ b}Gowַ\.})Ah[zVӃ> W+(~3[^ԃKX?i2e+mާ)0#E##%d/-? 
sEVK"| 5VRY纯T-QX:==AjyL}!8}y~>h}L mwYE='7+2ejih ,,َuSг'q4 @]Y%s 9Ph jH ǮmVɁ rhnYB0nߟ9wjz: VQ۵]<8zv~<\fA،]0pq:.C7]~%iǖ yZC7`/A![mɲlJ-&A o|D(_.R }0^UiW8Qo.ؿp_* ^FDC\"6U(H/f\?E){g)ىџ{eDM='U?…~ zy1 Q\A!;Uj,w}smyl6 ʻ[gm@_чDksPY{.2sP\ՏW/#I< =z{ӿg%q%M4M-\*Sg՟.\lvaiScbp۶9TmW_o^vz^B[9b]'IV+O.!IwC ӯ 0]{Xng].w%]o[*_ Y#ofa1_qe6bKTFﭠFfn{[nXُ.3#+w(wP^+{'*zƟ]E3:| j?VZbsH H!!"arHצ-NPdz(ABOyi@$ ,3)F>K%*()#ц.ۣ j9%}#C WHA,,FNeq X;}1AYEǧ_<|;I> z)9 pل> ׇƄQgVltJ%"R&VЊ"0B?)]qS=hG< q(Dl67!\יm{x}!JF!?`6LՖYO~*r%,FL F,2}=Kj:s>pgGyP&x?=~vܞ4D5n?5ВIUȌ+jd7yXzg6e|h:G?p!6@sO.7\j|fclx)|$$|pJ/L1c~1\HIO1s{kW=*l HТ#4M>m5oS ZyI,8 ;p>|ֹүZy:;}EMٜ)[;J2gTZԅoʳE͹ɚ^%:yΚ>CjH9ArWE8$.mF[Ra> F#ɉ Yӧ͚^a:?n阺SK#2!5aeԸX+_Щ=s=Ԟ O!Z\ޫ"@ F/pJÃ&,GGъNןÇ tK>0J4#@ $Af2 T[9 -c1kQ\yx LqY,Cj@@XLjAesV%W[N ZGK!y3jg>OLj9+j$>à~ ?3D #X{0?m }}~^$]V0bKb'|FbHLP=- }izgQ9 j )XP(FC :jg2'["Q*MX2Guԧ F 6jMD C=q,WKM)0qZFe)A@kY+}.aN q,_|v3Npuan_d]J#{Gf]t7l"ϖӭM';\_,˺&xN+`ʘ.h8!I4h)ؐ@+:/YF,H)SRɜRafXTqR#8"`+qd1cIP)VUaXa$/Ys3kGJDn"SN ,685@,8V:$JkU Dpm]&g!'M:0dQG,=<$5U+uh$G`lfH2RV(kz%7^UTg0%eR c)?<(s]W"ɯ)WMuY&e*ɧ2?0 kUJ05.udKCN.#}9tvW_P&R΁B3La U yp2<# (q$hPwlb',JVKFƫvYH?&X LqWO(nyn$'4!BS̜2/* YYE5aDzPfa~w'[UFkM-+j H⾽'`=U)4_o{1mVtIm|trjJR*/wTbaa,vXĀ|7Ą(CP: L 6= /apQBƒEiC錀NFI' kS25xbGsPEw9lVJHB"sA}& 0Rmbpm)ʶhf@BA3鹚Ct͘#JNf{p"( (4u` 4 j˺UoB]k.V6lv>_)p??t/iZ8rz`ոm5|JЋFϱ He/SpJUQpũiS~U紪 *a=Vx}v0 <f%E;Ec7',@恪v'i<9c*@#FO;p83^&D1濹SY ˥u01WVsUhM,B %U_ͨ47a*$8?}~܅ t?@[Sw#٥ hj'ga>ӒT i68fX-cKוY7.q, {4]wh2EUToVPwolL@t]?VʷߔAO޽`9jkww2{o~P-]Eփd?g7 ܒ/t9T߇)jYz-%t7Oitl{@ਚ=Yx>U}Y"]WBS&c(tzLfjT uzW%VYLg5ݘiZ8Փx֣[۹b!`p^ D,8/o~=DTivӇ7#pUvO27ӘG8ӍAc,6ʌ ;6`#*$;A 0AtA{̉z֢WDVi4 M2%T\)gerz?ier\aֺ9Z 幔V幔V.|.KiRZ_#XJh߳FȎ=;gIv4v00]c8vۤzZ_Ca+Y~6X!偅)^A6)lFR}nlGlez7HAO r˳q瞓2(:o;cƌL"^ˈiDk45[!-&ogŰ.9܂'g IM ר]QQ8@ҒPNAroDAǭI/`ɰ87y +;/b>mMvΕ5?bʽyB})n㕏S\-&x#32x5sf"pa/v.C殎ܵQ*e63om7k PO,yN|4 j-$ TH'Kґ0хT}a){FY1 )fe  D9g-0!\0Gd%%X310BB23uS/N Y$53Gc}X2sZi䴓0Ų[s[YOO!2}}1qy67RT=Clr7u\0sOoG}ąDͱ F/pJÃ&l?ؗHjv'=|H1!&;CL6,ǔlF'Srfӂx4s0ērLFavj3KTBqXt=@K"` .,O!2d}HX'n' 2A0ɵ r 01qV{tHbaJ3 '!ŝ 
j~&s=nҟϋ|2tXRX=m X#I?[@̬YPQST2 B ESv!,=!XOB`DDB]T"L]F8p:d,5fqz vgN4+h-X-v}{ɳ?LCZ.eYZ4wp.\XV81',atAcǙG,I4h)ؐ@+ 7 un4H蔊2'̰4F%pD0AP8@2Avrc$( :!leKbVڅSK .E hJZi,?2vh:ԅ$:b &"X^h$G`lfBH2RV(kz%7^UTg0%eRgZx8_.`~0=,X!bLZr3|5%6,3ꪯN,huj5bja7Wf%">M%0;N94I< Qy>d\`&4\k['O:yw՚3Yݓ'$aPN9y('n2I~d2-RB>݂;ߏ\ɥa*:هp"q }1 g+ {PVzO'a׻fڊtѵq%lY9D1>]BZr~W]yX~io.ZgX1Sg o]v%PQp9cpAMkKBnn鼭 Z1 G % 猞,7t*#7:d[m}L%K'%ds4,!>Cy+W%`jXRoxLyYܮB _{շo^o_w9})e7N߾y p`Z*?i#;l`Y釅֭YO/$SJ_S˜IUV R2~yGUI5NKk W?>"Vu4ڛ7͍޴"z&+rK_oo8 1]|,7H_k':f+p7ʹE`3maYxlL& I H2h^Mξ.?v$`"wFƂl88("|,ѩё&S)n 3,"gZd =8T3qEI^^Q|[m#ĕ-,RgW"VwFkߥ>٩fm;;\o'MjNɊsƔw q0!nky8ٿb zb7~_{abJbnt\#hv3bh4=>K;fn;fQ?w<G{ݮ Q:HQ8Nafhr!Xܭιǩ|j_9ޅx ?訡u>j}njՍm0*4alf^(qvv_6|8ȟvvxKV){-YM<-=!ʊafqO"zaIKMY@Ǝmn⡰^;e;@{FgFv|lb18yleUd1(|s}D;^p WV͏!5}tQa15FKB9qA+w^ i~+h.1 ,3q=8 O/4Y#/+oeǥhN[yx}۟|yo\ ᄑscE9 iލJr>Mi\S렬R0u:r_o}b4󓧳ýPąrMrO>3{k U=z@qY-G[޲9(ʹ!nnPk彷L[qJ Զ;:i)J$6HRVB 򬃼:QK'U=f @tG2~z=@_9&OcD;|C*s올e)3Ek@.<.'ߚvgΩ5}"RޥH^gmmTe202^J۫ܶ界M9!Ɍև׳,ф.6 >UQ( Ua5sS.⤪~/ ]6sa8}ؒо ֺ}XuŲ&ՙ ϩ0 kZiUiZ!=gt|U~ƓU iW%]u V$q>%|uu`y>2L?5$e Icev](۳ < 2yv a.曏컡e Yu d3ܲ7oaFLkg&Sf:0ɇdz\gӞ"/׭ܻ#>@:ƛ}|75qQtkq&Uk9 4KۡzrT65=W`D8kzoxls}!lWYG°يO/O;`̰Ϊd)U:$+3f&0Z3& : z)?j/=X<|ٯri2Jcy$?:$IS:8MKXy Nc?t46auųm-: tr9!|* AoY]5Zq #w)-L!Gw䰥uCˆ3ˍ1k77lk ]WwC]!VgnRMUӛ}(GjRVnX1=p-3)K*ofխ{eOBOj~n@mWf\2-1,qy0p L-I[8)NW4(qT-ctiyo>&^yI\Pi4Jkb+õW&zmw('q,.pN\$mL2Y0HE$*m9Ǝ6fSɀd3 E*(t$B Ř3Km¢c.ORg'JfZJ5H(3xCmJ}  KHLyeYLR n-䒵rH&K9Kg\, DȲL JdDKdk1'!ZbAd0f4cfo,A;GZkw aDSݞi<~-]QC#軂ڐY\NR"0ǨC Cwty ZC@cVIbF`vB+Y@0xCVA HIKSi4vlw 9zzHIe66 D KJf+ ̪@5I> UJz$hh̭!Y)(lw` ۸/K.aMZi+$(`R4f$r/$Aˤbbjʔ ZE+OTQީ] <nCH&&E\G(uCczb+ 蔳`B#DgK4n[h$(#N^z6b h.ö<");GeMZ+yj9ʸOH&,S10@ ʃJU,DcJ}`4dDs+nUmb"֐SmPX+0,[XG-X৏>T^ @ṡ 0Jʬ51{OaLfL0R ]@P+@pٱڈ{m9t$6 qeTZ*$ "2f7!Vm.>2ƒ>D{ M߂TD% 9Vb)7wyHrI1wQ62)!q20[i.bFb1gGT l!ƦPr2 D O t&Qi/+*٭˰Lf_*J &M;$&F6Xx5-: BJag/uTx8qY{WkML꒜CNQ@IR*{X0V5K^89i"J\(κ$r,d$QvM`e&i Vd6Jyk23n0.˚ .z%ĖYƉAEa&@;L'JA'X?+MI ;`&.rRvΗ3>}*\ŀY%ϡ 6& ÓO (z]-JD%Vx H8b*-``-Y(;ۉΗXr{w0>ɄM2 H S,m'VhQz{ v87@<@ ˠmu¥W;ES V<,dN$&x1,!!Io'Ym{t+ǣg;;;qR. 
+!VZa̘0;KkN8 aV/7 t˞g] WF+Sғ] mm.-g =寮9#iJZ6#cN Xfܹ3DzP{q;.֥-ƌ0bQ"IV;\9.INDx#B2#ȇ ocCuVj$$ҵؤȏCGx'z/rw7>2"9Jlk<`aXDVjJcu=\35Bt"Ǎ=Dk(Mb/o?0jgѝbi5m4 3%V!)uM-SE!.e4U  -#)4AmNX%1%*ȵAhObao<( t\fT26m\29@ @kSn^َ-X314XmP JlliƑ7QG=nM{2-;%sie Hf,ot=?X<^m4Sir7LJ$/U9/a#K!fJez!n;+rTS!u]VvF {B6G=%zV( ZO!+ '[n,IjȓN 9;WczE4fN@ν;C]gh[D-p$x8Dn<"7= $r`u+㟘Ñqw~Qv0KK舩?_k_ 0J"=p|?"p,4l!OcnAc>W^_&!kW{Mѵ'yEoˤ, T?Igd_pQrb(_L7]c_+ky{S3thyo}gԍ]ֽ^,xGĻ;Y,.orIoި*<󟗗qN5Gz=3NϹμ T~LF2ŢԈL62lk۶d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6d#L6roFhHLm>& X҆·fL_#E۴wLsmܫI?D+P).I#0p3ǖo/z \́\7竸5(jBU M!3tU8QA_S2(au6c9mlYY9L1;GJLD\ vtzr1}=yNd]OS(I ϾyӰt諱y̋|zO~1b`@K1y5JGuʵsr HcBh=r/] ?ڐdl;?1#pR7Dm'e +7ilxi*lxm LuiΈokX%&b!4D3;,IPa>CC聉聉聉聉聉聉聉聉聉聉聉聉聉聉聉聉聉聉聉聉聉聉=W'M {O)LK0 ԡSu@0P: au@0P: au@0P: au@0P: au@0P: au@0P: P>mys;.cWԁ[r]0P: au@0P: au@0P: au@0P: au@0P: au@0P: au P0:^>kos: Jmԧ/+ձx>1P@"84 ڦPpE$mw\I \աoWv=<~Y=˭~ߩW$k'y}Ytiw7p?D#=_lꬹ:᤮e~A; 2-J \~IFW<oP {v 4Lb$Ŏ"T:G~$^LB g:e"+]6Jק!]헎dجtG&x)1-;wR`{vo=+ IF,Gߌ?޾J2/O pY=DR4i_+al$~&f`hGz4#~4\*Nl_zV17%IK}J*V eJ !=q[?]QxlW+b RyiLWg.a y{N$#f?vv-{a`6'!>Φ@}+L"ٕy]f}GN<jHwDj|ݓ$;@lC^o"ùvo E:v_{=w^mnX`Nտw?%Պ}3hdd=Jjj{oS'0>h# %;?kPS<~L%?BMڸ #Wzp܅֑oeգvji^lM\h5j =פGmD8 T0ާJ" eɔ+8haxS ),yV"+Jֆ0d/üglpL8*+ei(,D%ԑ -%4CF^[ ? ΡЀFQ!vF3ڊHx(O RvKŌe_ƎPo+UD($fefsg%@< s9z$#yQ<] :{Fʑ+45qov̮ aTKHkգ"T1ݓIk#j*d= ؃;͉f1 f0K<,AkXv~+ <.A8^ WW?鍇ѺiW1'('3_/uudun[uw2>x#׫RD׽E gZDk;/cD^ CU^{Î8?\P;U.m_*3˶M4C_{mU&ثF7_uevLc2G_8~-cN, Sϧ AsEۖvfKߴ+Gi #}_޳ԝVy!Љ[F)C?|p4T:-̉oF0(h'V!E%. GJ_[˗^fni_1XTӫjnqXmݢgu1حH[{$7yӠx3٫Vlၘ>ZW(Q8V>ߢ_tc *Kjш0 01]*xS=߇QD. 
ga=izV%&FU2mIvXq৉vHXVh-~T}ǼTg<޺V9֧ѭ=:+T,swH |4F&Uh*FgFs($*0sw9I\Ng;u6~XB!2<~Fo׭_T-+~H àunq?wovJ;Ϸ}>^{.x'?ab_vNN=ʦVǥb[iѵ=J.\|ҿm.XlӜ^ϳ/AZ.YoX ~1Vx럿hL8b/TMWͨgV禠%.t-y]/ 3C;]n9~8,6q`Igb$%$g=/}ѽ%jVd*~U,Q/>weA}x;ؽMP?z)1G/>xXo`psP&<Ӿ_jr~u2 g|s+&&&&v?)', J<#s8#i6cҸ6cGcnͅxTq+;G61cq ml9A)xvͅm\ضǣlFf;Bٲ>]a} _ogeK$ø4fE3&!8Gʋ^j0BWDz>v6Ml齹70q~]p͋&} Ջ1^>e/__5?OL7^W՝QږRSmГ +) x}My̸mXI(u*Lr<0dvYMʊ<}U'Wd9+{1 Z45Q;buibF\+jc M rUܶz誌#p14 ˭CqA,Is<΋)=جq<&̻EtEmsӅ8@ǵQ-c:RjfHIC4!u1y̩)22lg_~O}:Cz6Yv [T-[? j FɊC0_٣ԼYZhgVcJ֜ҧ4 ߘ4p Ay re;/C(Zэc!:ehamE7rOtO]HnMsgg^H\{WiۆwmT޵$+X1Kc >ot퀢G-dmײhl.m۫ $k1")K7\)OYtIYfz#Zq)q񸩼ԌW|xkθr#=U_ґ-šc ]TZ"*6༏qŊ>3r'MGl -,$AQ[ qyf1cPN5Jī)frT{k [l`iյL}M#TfYb?SM<#uXO?dc9Lcϰr]^;+ 77a- ݏMFܻGr˳O]6vU+7, o7]||Xf=̽6Op P?\%~磝]PcI~>gr3+p@4 clw5O9FK8L0KF~+Y@ (M,qJý&>9%M3$5;rQC9\K,Gy\X1l lĖ#1(?L #03gqCM5SII*Q(FC "NB0gPDԶaSܵO7. xT;VIR 1戀(ϖxFJSj%$ `lAcMc j#zt<[X$!1!(L1jve{zµDTjR-'&gc>_@mq98 ,]_f*ib,I`9Z+_n"q^?9h7ӻ9,*U2e KX" q":P2 &H@BcCHO::0uf8s:=@0!,K 0 m̶7E;*۞I}Z+uL#{^Uԗi/FG VIg0&۠4Yֿ5}*6ȳ!AsLҍxkp,ewVr-!B5iߴz2S"`QKlxp;{jIʖ_mU3,;4, 2@ ?n͘nE簒ղUV&C:.{_Hl7- AFtXS>9=W*J畵k[yuUy}ǛOWMLT Xu#0.nVU$|ܜ%&<3unusݷn=teZe[0ѧ`/r66aNS5,m1, gDʦb^lI.Wv4S! 0@_x&6F,O+mlN% E*ʰ,d`ެ)0K?+ý(uH)E\5 [_!FH )vBa Ze "[@ s4)5a?Z'DhSF;J |kBVVQ sdDc!Qb=nBճ 5[n=1RO8Ffn9;Y]k1nEqJTm?tg&;qRsaRK01jY8F@9pkʍ?>>.n':2A$x& &q:FGFtTIFюoE4acaU4+D^i8h/ʥ3{I*Xߢ}"X r =M eDLt[JpHFΆƜN6`C(YqsPJ4*,@e$U!UQWiW b+ENXch(ೲj÷.lU 5um6< 嘍v?~D%tyS(0EBT XcI,Ǔ5y>93uxڣtup席d?ZϮd:`k;0$ [ $uP^7*.chL\1'Q;8/Ulz"\Oũw77~gg6rF^Vp^L jv&J9&69l.k7clֶ߼-nRsj^w|V1߬DQ!תyZj?y|RҞ%/ukM|=+Y۴X!s܅vۨCIuz7ɰQs5]AφܯA D8k&&Ǭke QQnD=%i~f'Ox&wgG`z{8 P#e<=؝ -mcv?f?';V Uqsu]߁z+*>L7k6c>9Q{Di͝M1gBRg]U]I='?+v4%Y=#rRRK`FU gw>Hyo枧Ü'!3*2 mMRk5pݛ;XvOSޛK%1~xοayhڝ,z!'[ٺQgx/v9?+6Kxltfu FJM;5Nc`M7Zx`)3k2:XmP OԀL,T8zQ4~w 2gV@ctͤ&(oƾ6s 'INqWFhYd?Oo^߅vmxdpѝ[Pvûߎ\8޼ڎZvKmFRLS:L V;ٛl!gl<_wO^ɋà Ã,,IQueJ(:kTW>x|4(GQO4 e4olm䕿,|+('|ȯoFw0׍/oo&˽W:en;^C{?{Wq $2R8Fl? 
@Ч]Ijw|=9$g8#iFҬƀgGuUׯ&'v}9D˫1(;A;sda\wA l$'4!Db5c`QT.х5pRěs<],ĨG/qV5Jk qy`b v-Vj5bc223߬nb5Jmsmb)[ޢZn/;1JV+$mF:W M皸}Ӌ&mS.V>$i O-KKj бՏb}7T8/:DžǷgyݾ\8$5ҵi[nyZ޽b_{- 0oxzSs]ξp ٹ`{4UзYLxmEv<׍/ԽWF^Lڣyɧg)D&N4F*a ~s|LABbp Zu1!|pL;h$NjňQr )q`ʌZЪ`Y ՠV`LܪܼoY|ؐs>N_ø<+^ d)z^-}WqOzBy{fHj00PF~c*tz#4 ߁H%IM*ьP" E9ek@v1!j@Hk -e@Xl 1^0+i@!Q ;@Q` CzpQk.ogřO^1ȍb}հ Y*5  "R鰹\MJ}8`;.=H0:!JOAyk!||Ani5b+Ɗlx^G9lm. c_lPIce;j )s dDzQY EQK*}= a'[pDXɈx=xr %{B2 /2 DtD:,NxI%|l"ƸNr<ǰfĖRRHyɫfpG,wX]*$[fhIE"%(#sZtnw] a+)%MJ<с%ޞ4At]ErR?ݑح4^t"d=M'K Vpt>ݼ.LCK8Na: 4aVs邏kؠٷr> ;bZ5Tcl9#zU 2Ȱv"k_rtcњa|R^Hl<7i$0GeZ#⾀7hF,Gr"#"ᢋ= Q~E)hA! 05(xɁ3 LÐC°kZ`u\uR%2(c_I`(Pz! I[mq1ȱ8NJCb6 qZ6J(i}Jڠ% \~uE *9xfe\B)#bHY-g]6BuO6uEs#%%7)Z7,c*Kc*+GuYrWjTxʆi6ڇҞ_ 4/@klG{@źqi/遣RJ]̨ imoxa Ay> e5$tOza=J?{6\N712ŎD/_f5]n-w0䥀-EΩ|ZI=@hH mRzĿ{칣MBT0 /pT`5mմ3vQǠ>zW}A^p==PfWEAۃ^җ5lq14YmӃ&}1xm?mzuކ% G]w}20zDTŬ[ZHDǤC)dvjP1zmjZi"9}Hk @&jxDcAolD=f߆͔P=B9F8LU3&$tXKK 2=c+HR Gts1}u$/I{],;6}}~EQ3<{\Wd+׮h5H s?e/ \r"6}Ҹ  8XȦSI_x|**{ͫ'ߩf>]/V0ED>Y Gaa[&lad%L|o8[ s )t D, :^apxv\;u2׿ds³BZ p瓩ݼ^O~>J&E STtP,TtR\67:PC?}?7<7zN»:/Hۂgj;j)(UWc;q4šm9K .sedW/%7˗}dj1?,\FԐ J+m*G/H;hN:(E}杼,:"?[e`az{O۞]Þ-7͡cEMuKUy?]VsZ`ΫǴn1v_da\wʍ6 (iBaF'Dkbz CkNn_0GӅssz$J&kk @J!*JR͸FhʌOƺI{vAӦVY,o[KYc=o`/Mqfčmh:^,7iz:%i`'%KrM Q0Gv]x8Ivv.ZjyVi[nyZ޽OrZ,NW*:g7f1Ӎ;mڶMZ;_yr.L.g\m S{|.< C0EL" 5K"kJc"10JތI"_cp[l?[2S%rc64/LzOl8jiVd6q9Bb`0Y-;VZDN{\ q@Kr GcuJ\Bo%~ᴴ|^a◰S}_*e;إG25IPԡq;;;>jn8rYMP&S2'ELaOY:'H<4``0#`*d.@yݬb<@3IVg*X>tˬA99.'{. Q IhA$© a%%Y 2T`ܼt\d ZX۰?SSZU~ w+v}|/EaX/qz}0^\'iEϦpx PV5< Ɂ19 m#A@( tg<5͙Zݵݞ7a>s3ϊNYH idF&\p&@HˋEnÇR燅d>OrN*Hr*ʾ)Kwׯ7G p.$əiVhxj!H)Nm~ *JCޜ'OjLudOK|s?x[g l!V_s?^^erGے(ǛMzJͷĝ@]O\DhuOuݠn PYAnH@HŨmAtsbr0+Z+{-&Yk\1dGu}&̟x8wԯt{UyE$`sC5X`Z1IaP 00Cv""'o]l 5TG&₡k(4C&>5s$9<˩"F N(>aDu*v5EvC>OBCt?LP  aLc6rxJuRzw5NX_07^'H+ ]. d0,Cq2$َ8!K;`˲{OS"! 
eBx0iRm/3s9b0WffP0m.+2EA>9w cm2wGIMCվoM A~TоDk;66>AHZRcО!g[\T`l1wlDp7 +%+t$ zF#Id/q!+w0~1]?rL乶0Е \V\[*Ig྾Z/-Q9Vo{WGN GmpEC{שWjGpzyy!y`tx"m/M( bf}q0\ ]-t*3{C!MqOTh3FC<+L5b+Df#&5Pӳ-LJ+Pai1mG< ][h{=m?􈏬Ml>ZO*BIO/Ȇ 2n7\T9Ir2mvJ2o2,;Ugچe?N1PJ~tٙ,z14"}y.xJ;|F,;!(j$N6hʸˑ h9j=ԦrT̂ZQY.P&kRLGe|TuSI 5HP&$Hq,`dd0H+;%Y-yLRIGTx6@0e4^'ոm«ɓfJ+H*}jtcxbPq#N_J4"qA?k4~7#U4 'yZ=$k~QT1m55\HܥIVt$!-19y\9Dg:gc1.X Ux]U #kM[ L| "KF|q^R;<N}&' j6#B a/d! @f,B0(2$k.TT5pj)E2e+CBd[BϞSA|v騍JG-y.?Aڥ(&jϳם'7C Y9VY-!~G1_&?lN5~1C"Jw@rUΞ$E\U0z[U6JIjZ5{xqtWJA|d\X,"oҨ" 4 p3oWTŬEK"G$VV J_5T,e\5/@wQmxqm(@%Oip2}nݎ]񰹚aϕU\`a{Gnr 1keSֺTv< xUMdXFWrƏoex^QGq++MU9/H;@_}\{Y]Ck2 Oňo܍3 Ii]ݤƾ|,Wqxk<@;u{b ^'q];/=+ سQW\gk&$tu%p^ 5O+j^ٚi;P<GfiF;|]P_K} !xC#vSCIl`P̫ƻZܶmc4|7u挙#Sj>F&'ġXxF-}O63ke D0Mat+0f4:vGYl5w$kYy"t]02D2B /> WD"h$FRnv> ^iѼ#v-RWu:8`$ދK*rkTW`m qF]`nԮ{JRIJtՕ|{[`B[$6u%]7\^]I*Q9g썹)XC˓YĘ&ct:q*;Г8رB!B$6K1 9B-h6YT xx̲M,,QR_Xv#ċB/ r E{zw%"uV[ $azq;MsSwJ\]h8O| o tiOloay9qqYs-C^Ι\6NR^Ne;}⥼˞T^J?r2u4zhyddמ38|uEB1X%~@*Ct(^}rVMAžƭi7 I! k7I/,Ba0 ~P;jUpq~F0J.TL*ҽMuGː=xRݩc*C}U1&c}\Ȉma [%UD{_ܻn7C#'b˛.bM네?U N8ˤfPʃ򱵩3C.H8ʹb cwmDhWѦQ48]>b@HX&5BA0ˢ">iC"< ,3N< !v}Nyx;\Z3JB @Zhe2ZkZFk-Zhe2Z뚣J Xk9~);AsCC.`"!6a!n.rvϡµIhy",``Gz0*EKZr;ҾV5,0S7t:w˰@oNѹ]37@o/.jC.8>p&Xgo] ߩ'Ie|3)&ʕ6×?7a*(l(֠ہt+Ao1tB2@^(Vi;dȵ.$u;ZCI$U)Rmx01%pdbJ%61ZwT' (5zV@P!sƖ@M\֓WL/|کLKS!BgJm`X) dc=' -091!qK=C9M`KطʷhZ2 e(3:nj{~}9is+[gNܵ{#L_֊BfؗjTӂ`{3ghBKc Q1f]37LIgP B l$'o wB`bqr4t Sͻؘ㮓j><~(zjRPa (;uٖÕTAIRk*~ Ew/ 3}Td,³vrZ_Hָ'eQNz8aj6K ͤ ?h<"SVH\&BH*edFu;g5%-^5> vn ~ Fp*rg~w{3$l&rиY)}"(#'{ L/65*4B\wFC8Y: fF&Eq <2^{\<)]~{o`}u|hݴ|9ڟ KBRW% 1tZ;)؅ꑗa$s<{%Q-իD'qi\.ImjIgAmYMCdd!%6^Fj0^\xj/r;7H_eW:J;jĔeKBwC:$S!U!=LMD|=I+IPO'Av(PqMѢMhs016)ͦ4Si-,t7#%X>qQ.,|Ar˂B0YՙB-{مW/K؏BEᑁ`쐭 ,KGԮ;X%)#Ȟ]go4Ą!G~Mz ]O@(2JM&50"XwV%6%ĶXH+MeG\q&Z$tCxD0`¡V]VV[l{id3[Z.Yj-W oHLDXQZŰ'$W f=vy7ù%[ R}WK83V,MwL6sգW]QA NΆK%Źv{ !<4>ؖ@o@(lnމ/Xԅ8L7B<=g  =e0هy< A,'}͵FЉ̡N^,T_ t1c݀Њ0fҌ@y<0w,n3;#1ߑa)Xشk "m*:A]#vV5I:SSAKVR99wh9kJlY-Z9+psR%8\iۑȉ96Tt| ],g{HW!$&Ev 
1~J\S$CR}CHJc5]S*X-\VU^֮7Nk?]4fx}sr<Ԫm||":AΦ\10mqsfE̹GeiʤޞVsԄo`" %ZW>RL@SNq飶ěU`̙ KQt5J޳-Zs9!ԕR'hT(SրOhd@aD%u yI"R:"5` ցN˾ \HLe@YGa9AQN 2Gd1mH؆N^i@ QzHLdu4(&9bkCjuI!%%B1?k(jDyһUKwG8R8L0f񠍳`Qs"Q3-9˜R 3ۇ] ^ C#liE,QGY;FٕU!툇 O , Bvmtģ▱ٵ$[<<6Y̘C:j@@0bQG"`"RSFDD b FрG!eLDmkN*jS7gUݖSr6^VK9x\b'uurIͳ Ⱥ35R?O ,uWM-vwSVfSwE2;xfYwZ#WGN(Y(N&6xyU1հtSՆItQRmO 4L"[ҶmqIF]l:r¬[Ni[sw+6-vZoq/uBgw0gHGcb7&T;(*IZlwl8jýiIs)buf$'ث5T{;6=@i&/RyvҒNDN:XU\Cc.j΁PԴq88i7 ed=5a%Z3OO:LS6#s(5² bVBy˵ބ0}:#P(X.H2&(δU64]VSۅzw/VIZ}}'0-]HT s8BӭXiwu߭󙹛ݔ1M#H;^Oϵ4t"h@QAsglrD`p'!xs>fƙw ʋ2"d=$U6lh3H E\qxFeF"2jtI2P0Ev3;`;0wլ$Y ,+=;P.]2e KX" ]qC OY)i$"R !q$x.dnp7ĸf{?{ HitL*"Q* J3K`@'uby5c}O#]`LG#Lk @ Swapk,a^(85iayRi*Ԑ+(aB*w,8G3aKkp|!)g fm[_ys9_._ʰHJ5OQ+ s<IP!arR90Ne^Ѽ ]kq}~㻻My28B skqп).9Ն?w7f8AeՍT:Ge -3D|L4h8&z۟x9dzɺQޕS:M; 40hRwSXb/9/uT oۏI??|w~{_>\aՇ߁,TŞL¯ 2jNW|j)wC}ywU3݂z R?$C#@?0mQˢ?UyE'LƄ-ƫZVߺd&6_fܿK_C<a+61rlc촑UNv%8b1d04-`F~|ls¯6%AF=iR;ɟhgb|FH ;v!0ETBj`V2FPBH,tFs{G0E?<;M1shgN8η,de,S h,$+0%nMD aZPHk<A"isv-\VI,{TnE!N7ek˿VSxPNP[0`5g^xRte4&1qꢣ#:$@7UD$cXЪs6h"( G Z99qx8z1829o;=|o6yL?w}SMF8-VWpozF:+{\\ʟ~^/P8 &i6z~W,wEo/׃;?ç=n2'Rj©?HifaZ`oZx sZdD)1lf%"Lwf% Ao8L>'9ױW?ʛC9&tx" #"( -XJHH!]gجmV]9 *`@/7a؛ACn\zO'3xv ŷpeXc$ڷE3B+MLݲWcj?'.ڢ!7BÇݎT \anZUt*#0'U9&AliyYEHNgZr9AfxZByfS0pji x5*Cz Z{1[,} *'$1teJjw_|=fI*vģ`'Y(AN~BLqXt-@S,`3,'1bePyJLLryå <3BJ:9+j$AOAQ:SS5V'&AmdVI XJ( WBau|5m3NhFE>^ոA*|p$C Lf7RTTxq8\^]u$:k'LՃ9v\6d[-NnNRnd =!"=AZ3=RQS"! 
%K9 Q Ƒ6&)N@?C)0w,3tW$I-W#({)R53 Z.\e;"`vy<\; # V',;byyU1հtSՆItQRmO`u89-i6ָ$e.FY[Ni[sw+Ʈ8\BQTٗ*;i:BX^SKvu˥'ϥmיeTw*LP J;6=i&ea骋NPiIY'"@'z,*gҡ}[,jkwݎᄷ%I W ORքh[;A:x yk07o,w!C)ւ?[Vބ0}:#(X.H2&(δU64]VSwA姐:1[i=btKSx2EѵЬY]ma[3s7r 7V5z=>=oӘGF1ZleF͝QFG2%"viC.fD5h4u5Ζ8ioW6Fgn7Wo eO6}<`Ax~CNd:1kHpy\Lg{l2R\G eZ30{-#cRњVLs-k>OWS# 1dS% ӊgjv6W7pO|Js-pt]j]ƳONޘ5"wVw&겱l'52ோKu۶XU;uVn1nY(QlnD nwMFϋm.nW_uwW x֖fUGGt\1 '́7:;ǚWMT l־MgcKv`go7FQ=3 l%rw-LmͭI{r0c*`\;77q JD$oϮ$ X ci.\RG:p) y-WZ40XsZ2]9a \$ ;eɆ#\dg&7$C.g}[ӳjXa$⚧9nQMo 9G?a8!$P*&?fWixuV\e553o烓ၝIp6 =&|nubu|RKN@:烼Ή錙BzjnOt^7dn7XIa9Emp0q1Nbe`a%J^G9yJ g,gHꘃ\j+f)o[dFXѨ  ZtZg' :;lw^߇͟w4MLTsN6: KŲ摠؝IhN ƪK[:-79k)WKK/vFNNL?KK{ R+ I;>[~"kg)R oɖk qg7EY]Cb˻ԧ*2yN_>; Q!nŅdz6V1ayX#=V%i̖TcPd GF`39/1]d~V;GJQ&ϟi'{K q s SXD%6jA/Nk|%4"M=ttdd kO`qB9e3^J'T[ j K4q8Tkl*' 5oϝq릁dJ`IىȵrXHⲽ[m*j8#d8KO}5:ۜٛ3CI!'f^:e@%͎~5pqfG!$E4VFx6vZg4_3KK\!/aNQ֘Af%5g쭶InxyôfARO6(OPp<0ȹYȰ, Ƹ#a òR.k%'RHDH Y~G'2>Ϡڐ.u}: lץg>#f];Ydauޫ/ȐhvK61sOU.і\p7 rT7>Tn&#HbԽ`+<{1h]jk UcКFD§ڠ̷Ewe<:S Pn3Z*5amjw./>}h'ͫz `U9g,'Ω"6y{6ix ]J? ڴ Lqe8?H* Ś6 HxejV0bR9k@N:e<:8[cR;ĬW逰 OjRz\Tk(b#O .G''f=`@ràr!W?G, 7`nX?ZGNW1޾\lSH 4+v09W a`p B)Nji˰w]x׍F'Utp W%qq0k܃|@(*`XRs*6{@ I`B!ʘ8P=8k$( bZ+ Bn(pw;3qǑ]˞l~I,ےr*Lb!u9$ωcQWZrTrأ種0QgÑd=HEc"K ynjzJV>ttt~""_Et2'+w4/Rx<2圊"MX"IU$~G/o1Չ~ֺaZ'*cux8J"Bb pDDtܿT ı(&=v{he#kOHlHࡌ$w| HƆqc$rqDRs B 4BaV+DH|țu. 2-;$-ZlhghΆ%4hn?*OԭzuҖ픴lsv5vgTK_"f-S?70[ũsߛFN:G&seW&i__օr2Ivaej.j|}:~eoMX E1#L B-@tdXj r l:v+==˺;ߜ?9=WdO0-$,ds}gYYד~z>>}=σ0h̏>M Nyl*2F,.c?\ZgkEbϨN ՙYdx^ati2|V&+4{,R=7tAciXs8ˮ~'=|rxy:I+qcr )E;y~n=Hq|?YkRu^d.?[;BvI1cx߫P,8 IWE,̱i-z/_}0kFˑaO~ׯ.y=mRcneYPdA[BO~×GL9Nx}rd#u!Ǫ.?=JzLòBҦ#Q_;Iw( P/Dij' b/^zpd.W0a|nE_܁$e~?N~gg_d\'WŒ>^xZ u*1W޾瓬[E =ir];ȇL^Lv3 |M6dYjʐBb%F"fx%5 5lѯFv =1,Zz/o{*E~;8a L`w{ ze̶oG*fNd&";CWyb%҇/E}a/pڶ@wo(i{z@jWO7. 
}@bXW;uő㔯O2Н7,rv1ߕL1SW0DJ%Py<]s~.FFaq1z9,pxZ/TŻ/5,)p1fyU:q#3mY*o}(+ U\4I)Yҩ)eMw*b2L;{I],<&& :,U'O~5|2b"z_',D9/>8ξk0uR"+8`:2#iV~:V>4>Pi {eaWwCR>BX\#U'ϋ1,ͅ8]^lYTweBV~gl/[ժzi5B.9u%ϲuQvqlFo =AGͫNJ=V+6A](G6M;C9Ysc{jv 5!9҉>~G4ȯ] <6ĖPCk 4 [e+;~ZON쏅un KBt\,QƑP "xRB"rƘHbloo܀v <m? }ס_  ejՒL6|g nw9ʝT+tґCuXcflI)[iOX diej&ր4sC#r$B(=<ܤnnC6_1l $2&qr X;Țiaq]Ӭ^V> :-әiܨ35?#E 0 7FKJH!!" 汅j( IKDWuJ9x0^{#Da{o2D:0cΡq ꂏ[X>z6mrP;WͲ-[]϶]qZbws,䁩nP[ i!rblS,ia4h )FDX'SCt(]E~n߀m ?NћMoRUj^Ax4Lg# [(HUh#0%#c)`" *cY:`@oeوBIA38Ԭ0iE1EuB>RH@B֢92d3&"rKcuL$Tj(aJ98HE űh/ivLI{v6'jTVQDW~~y϶wJc|{GZ'] ;OJMK&J&_/l4g- 0ݒi1=iF<"Ȏx3~0٥@yy-uzcAPG#ESukikaD,֌pi9g1A^'sۤ'gcs韇K +v=kDb,aT8tHKCGmCp Ϡ꫇T1~Yp3kyEHbid1tСV2E Kp\ܪed>yWԼ(FhtSEvmU&S)TǛL>_*r!:=8]9JjG<;oOn/ XSJ0S$X FXc "N  dP.ѭG}dPnsp2+mv'QzZfIm=kgءc9LߦC.妗)7w^T)l{/ rGg o}pZb^\5cZ(rOb)_]e{l^0{?\{ m{^(X(^&xy0֫ M6neRC@Я-vLֻptޙY[ InC m8٪.tAQsdٰ0«#P s.Qܻk[\ns 0"l3G&^ބmYvr̵ѻ oѣ?'l<͆Js\ uqk*Ռ' tң}Wu~tY(OE񄷣Ig O#C Kg?xA{iWV` ~f|È#T:tQ9Q"P0(rdHmƟ@Z%g]Dr'Bzw_92onf}z|Z-]K r#i֞.If۽w,82z9^m}0oa~(rjmM˻qVeϵa:o|=_r !A8:ߘDr-#k Ra-v3"a#-4``ً>Х 11t=DhLbpK &1 Ed[9t P[dA7xS_ tEFS1OzSPL [Ebbb3HHqC;0 8J 1cKOXR2kT= 7M]])&[H{hUsdY!#+HdHhkc6V[ RԖNR_EMY?14%d=R>v?*yD< ț`ř \ߩDdp  &`p;45rt3HgN ^̈ų_FqU\B0R_>&r)&-P͂[E&y!oy5@xr)bʞetv=9BAG>@ Y>5>F65͝;oP* T{1rf'd^gW/ଁ"Z|&v4EՒ,+T;29mD摮ۆA}BufyQd(~%'rŘl [GE`$Fm+FY,iwL)#~ȱW]Sl֧pqհNxCyg$ʱ_~z/?_xo틋o!bo_߼pV/nfFuЛc/ȉtGc<0 m˲ _n7iWCʡ1#ۇYl˸QSnq1U:S!a@[\:]|,}>}aZlNck fzeXn `|m۶1%9Dd!X4` G*F"z5O(E&bJ9?!%2sܪJ&Nvm(rh؊ީH Lƀ8BcH\ NJ`8_J!30'>CHj*߄fmطn60O,Fna9[ .U$Ug߭Uq5_t;V&B5`8Tq( `t{IB]9-RVft5qG$i?ZtHHKm\X D I"ilmM6}Ӣ$*ԑ<6$rb'vV- &)<$/ϥnuT;VGUd9DOQXvbUF\+FU}6d5JKQk ư|оa8 o8{{"5l~{DBqG8:)p @Z ṮdX!fi֨k[&-[Qoh|>GYjS3yI1[ǥgҵMlOyF!<*t)~N{:J҆Sp0By0"KR4t)e>A>`Xqp'\AmEk][onu ֟ }̩WmOt5B>䄂'^I җP VE+/z%EB~UهQF~AoIҫ 䀏u_OUQWː|=9zZ>-ڱ^ %,Э d.r'r N .|_4JC@! 
ֿj-)C].iWQ}q!Zƣ/|ӣ__;:>A؜CQ7E~I~ Lt_9| {*o~*(%҃?~^|Iz%+U}S3ojX=)>?|]1gO5FX ׍хZх|8*S.ď/ -g3N)tId)V/sz5PхqNEy[c:Dh|n~7={Ղ;FaOHi2TgW6AU܁5{en@` ȊoEMB_iu$/%x{=HJ< ' G!h#)AWӭWjh:$.a wbch>B]M'$0@ZU뒜^ɷ:WI} o/(oYnR+_Ma7{rž_ x Ăͧof5GYנFJ&o]\ wn߯ Fæ9d_$cF3 E*^QRdU HUb/wmso޷[kӳALhRB%D]6y4홀1pnGh [vOIn0:\r, ΠR_<8sO(j;:j_ݹyr|ͧO`T~2CF=[ WT+lJdE=Lyz8] T}PeߋXf3|S rN@z8(uSjPNYPRm``fǂSޟ~ *ˤ=|L*$Iwìp GKNHfR,/0LBxV!183^G3=YUZ8v9|0[{ y. U*kWwGx5T<ov{=Ϫ<9˷މoۖ"2B)a3_Xh# ( < WQJF\'\Z)U/' bq^/c(TA#@ FeaPnqu `mjo\:dbw:2֔M6."-ray \9-OJؔ(X-An5η[IUl&[M̳4@ ,R.{e#&=!&ޯB*ϵ_Pۂ*\qU~#+fx5ibV`֘GZF9 Dw2ԩ=XOvBm wuT\6wlq*,]8ܼmǯAg}[[;2T٣!NX#;h䉝~1H*V,{X%Kr %;_)3HZFXpIxT d <#S!!xB̠vʯ j.S׬8 `s'4 o!Vm@wjKqۯ~}sec\ ԑ:l#ĈrHP%MNaB&HvWd]%Z@$᷽óֹ Oa?;8<5I߫]iͽ:h[eCEy&X|.n]3=lU=0=C7-S؁>F@zwaO&$a-#3hTJ#ƎYЗ>@;Aё e<䑓!f Ŵ9j}e1]*z$1mỺ.@eÚsuk"{Ѽro54bz8twaK1G B$R1*aT윥Rib&HL1&V9̗xG 9IkƑ!4QTI."$XRei>0>¹iۥ@hVXo|*a.CT+`ܩoo&[=Ě>ɪ,('|]xNp`Bf S3tV Sшq%(sX$,bI90I`9Z Or@-ml4=f4'IL ՉK8:0b-xPa%0e$P[E (ozƘ4nBg.cnHL$Z~7qjuYUD3ATwX(ݑ~2Нj.մ #% Xd #pxq1I)%X%81RTZBiǫL*NDַAϟӈ "r`AנYgG$IVp}# Pjc0ygwlȥS$8?S|<lx$%g2sNCwgGEDxw$`!we#? vm_ս"+)PB!*i#TU 2B@H`S=SȤ``?^}'P`QꯐDh7)DEIc"dzV0d٩]QF C^hfJCœA<$׽.Rx<Vg+S`q:p-3/'lC9"oh5Ƒhzap0(i<F4v2 05y6ԈgcGxOGQI64Wc:Cim#~l}ד zEAV/ɱ$Y)xWvہN@޾x_7?goO~zǠuZn^כ|& W'S[y:yRV>ygFԏ@GyPWgnB.3דƫ&?4ZuE9q^x6_}d h O:w61 σþ*Ign.-nkZV%o+8ZQFnt(CgӭɵŹWeǘ`߁J[2B ׅDξŎW2R~{eoZnκHAHAq?_[fez R d|XG68}KQ;zJ ;`V ĄHZB%"֒KEq% ƱfD96xi08j3Պ];Kz5Q5>7 @Ecz㹙Xtn7fs3/-o&7bPƷ,7_ 7w[zйӓq/X=x7b7. 
UEl`/KLjYm.m,VC𹈂uhނ~On{d3T`Fa~fG`/=~bm0H"[d,)ǘM4 M8{?7^PԠ!3J@ъzx0.,Vhf!\zͩ2uP6-xd,^{j)XOlMӂ*yY% 57B3k8R!IlPJ#$L8$;0u0Tc9:cX}}8MyTS{8o~yq\}>/ށ.LPH2_a }?fg|Y?n٩+0um^z-µw}m+5Rz{?wDIG5^Ѩ<tإOIh{OGr X$?0{Z2E[#%{usu6O0c3T3m7)ϙ<+r<-&qaḏtvxntpJAa}k 6 l?< {p@+?MGW`.Nγ' |],|YtT/|yٽ`U//?i(: Z /;GEplGyʢ/BΓL4ꑊ sJh0t{alM9aUڗul1;o&\eCyPxh=x~V,,XBCk8KW3hT նpq˝U$]"uV~m#,}xo\cǽ.>C~:NeЯF'О~(pًt09mXOw(Pl:S5H~y:̳\ EWeat{²<$0_ y_v?>S_f h۷[}1(;*wm z~ NI`W[YJ\TDɢ.6%`ll5U_UWW*sof4JY~4s\W5~=4W់.Jynurdov&\H^5Ļ6?W͠}͠}#V׃uʨl'X҂5<0b0XЕ!qNv"f0{?O~{RK,/}kbB{u*5?wbL=FHkܛm ne!Qo#$*/p`ꑺ5QW\*KWW@%'4&G*,QoU"W+@0tu%+-F'XDOf󡛽^\wCa^|WO !LId)KT -c1k^\z[ &c(>` gM1Beµt2Q+R[0EV6;˜|қe7ś}YeS"` .,2'et~˧6{ɱj E\hY"M_|Z4@0F'rEe-D@٢y Ot'BhuʼnD E]D\*G%z9JHV}4썺J uPIEBz9JRJ JKuս9 2.]]%*vVW/G])*x koU"DXDTW ՞^=a<_>]9=_-pk Da1߼bf4hS'\~>s\~>s\~>s\~>hIs\9+!5f''Wz9$05Wz騒/NR0w!V` }(C5*Cz Y@SJx->cEXoWm0[|L 1̈8êX’]c-\!ǜz29U<_ieNdB:Up;oB!1qV{tHbaJ3 z" j/%/vӾ&}4z TJ=-dKb5A+|Fb[FB*9|8*Xr tcFLa/uD¹p& i$-+40A[8rj4FGFi:ʍJ?<t~ z; z˫MrWv}]jbv;^\4nh޽Ek-Asb;lmzܽ_s<gzy/Q`,M,Yydc1=RRA(Ŝ(aHr, y7Q? ,u$I-vw"[1?0ry9rY,[$3 f :Kh\~fBqfX9L<]XvRԮa'?ȎV.t\.F1K}Y]pgMMn;M݊ VJv6=4p͉Izle)񶸨#C/hrֈ?\J9(85|19|{9aC,7L_,O !G*j ڲ )`D;=dLK邜"]px>G1rȢ-5PxJ" (L`K`3QoF 6j)۞!qXQb&y:a*5(K)µ5zqy֊!N|6^l\ÔՅPe3(V:`qY^6wۻQ}Zz4.\YvɜV 81',a$tAcǙG 1A_|*^ gE֋iQ>rZL|ѢG@fC˻rs\j֗*Dxh /wI9(妍Fzd-b〡$eI.6`(N"2dLI<{7v2.͉ 2 m-{y.`ZrTW;S\+'4;,0xkXoޯRv5t}wy9%<v0Z`(3*h2:ڀmT 5 m!Ӆ,mUfYd2mA{o]tb]^6F6Q}~]*:0򈠎)!' M-B$LcF^6#\A*yy| )8LHR6e 2LwZ D佖Әr Fs+%3 -4Xj!˪΅&~xkpO2HYbYӲτ%}Oxדyg]dANҪ*TLx?{52࿋WH`ERjau 惺&{}q(dMc[GCJ9ō8`1waʽ3ø)az.aXqYCƤm$0h޽ӆ*iY:'w罌2޽*ڄBnsN6`B-.kWp~=@U* B /lPqx U{2b$}ZJa^iGjdrVk/;^sXeGGE&4mg<,ӡ?vtm|t{aE}-O3O`0c%jƀrפ $l'uDh"maLf;1[Gݍ+cŌƊ|mc+!^QPE9?Ѵ[_ʟ䯆zo{Pc]C !^()\8rAR8j'QeU>GT>_Tun I.GVݜIf\ǂF &-M0l(! x&XSr%o1Lv.9۟;"-VU:ɦvlĉ*=+.O!,"Jj Ep3ڇlcWYp=-;A@A4r <0 QҍnlߣeaiPȮբ3^tҭgko]PKI^Ajvɐia~9N`.0 ,- T)0(-VJ\|ܺe2ua\u\өJ$P2zEP.G qp̑Q[M*Ffd j0DNKYt(]k$)^D}uw3WnAaF.TPE# R5 j-޵qdٿB d'qV;3~IzHxTmTbLK"ݧsUf"Sg[!LU/书REn{xTe? 
mHIP(.ɔHrQkx!\M "0ȮrcQy'`KŐ:[R%R19*\XZ]\JVRR193?zʴS2cfڨ3#$ŀkbvudkUW◤7#n:{w7ޝv?»1[5"j ]upX hpNW ]!]iJYhe5jh]5#UCkҝ}>F2h'FDWLf<`<jh7@i銍c1&j ]5BW ?xJ+͉,K&="j ]5^֊oJyRWHWIJ]5,GCW T;ЪçR]!]yvRFPtJf骡]}>tE[΄V=7vt\'xQPki]zi59]!?jpZwtP)|+eY:]2r4tZ ]JeOtutbW KU i<` #@׷ٷWݳ={ Ń7i O9NYW9EӒSʼnh7^u3dDghyg7Y{:ܾv2/TW] eREmԓS/}&:v[cax;t񧇜q[(AswAzZ-{6-ꔒhVes*'32XKS.ީ3&2~c\36W ܷ5RϷ|&U]bA-ʡnq֡Q狇vߒp>R\QS"O54MΩ`W̴<>%%1LLD닺mAosnm뤏n[nml=_orV_컽^%vU}ޕ_: K;cQ4tئw";ϖFM][ld{w*;ð#*=u^üM q'zj}wC}ʬE&FxoBu9\Sy8ѹ$J`na&X^o!0oכa>ޝRaFN/nsN/֑>Jf*=[+;X6Fy#ھQ=r4qG&\#61*19@iChW{ Be!oͮ'gEL~,\wonm 7(}oLaN#jxWXš7a u&0TcCW '][Qtz{tx芝5TNYgWfVwCI'|̈ Un,t>tj(O%]9/S3^gWV) ]!]Z3؍hٱzytP}φ̖7g{I7*LW;&zj OKWij7@WDWƴҌ`ɱ:!J-OtutakGDWl ]5cݡUCDWGHW$cj`gFCW 7c+eJ%NtutebL`lh]T|tP?KC`;fun,tcvxj(Չt w9tjpYZwsW@iޮ7̤DWx p ]r]}t5w vdq̗/.m.|uMN~{yQ.chW8YݟJ{_: ݛ?|~Y !^wHׯW q~/ׯ0y@#XVeP1 _} muV$Ȱ^wp۞-98 ;6ޅ79潔^bS.~zoe­^F2fEܓh& BSټ06T'Ql*ȋ 꽇Yz0_{f3J3]ȶ_ms 7e˛t=}Cgb yїUh2ktFf%GOE,2jMe$'4-l߿]BRx4,Ŷ1g_-ofG*_R`ULBxY9{*F dEhLAMx%Hfa殽r^IEJPY) 1fVU,Z9}(h02Rl6~!-U)g螪U^l-&PE6JC%'9 zђwR(G<$Z[J(pI_Jt Hado5Pc:xFŽhaZ״k +HX՘lR KYUSYLM{$BK#C.[C2$3XbhƬ&mtLHR$5lDD0'3d-Z1K3hTm[_hRѤ2Cdd6C~eb*p4ku&_m 3$~%\-<ҷ=G@xꭕ3^xړ/r{}R.$iM,-GDyK '.?SkP AJҊ+!RmҒ((%) o$WA 5ՑFI䥩n\7)GYGW(V R}h=ZK7CX %$ZB_"'֦p.x+l!zR[VAb)iG"d I*Ҏe U43hQ$8^ԲRȪPB@"тW Kn{0L!DBiuXg=)K6h;]ㆼs`IRK"MBPGA]J7cB^ @oB%l *@jT5m0M"K45뜴7]q' B֨ 3K756JUΪ_eVEYu-k[f&= "!|^gtcPcbH56ZҘ;0 _POa[e~ 2#aQ`=+NF1sJl,2FԐcY# (ΤX0S D2AUSEc w-c!\ n| jo:k Be_A(u5d w PX߫̌p[V2CjC ^=A{!־?)qC8xgY2(Hp `e(1xVR pK 8\4jUgC1IUVv("K1At̂&lAF Œ}xp9 4 k]!ީq1yv]ꅅЯzz]pUG!RiK1ZT,C:amT~[uL *%" _Ab g#]Wh2d%hu y٪t9_?,_fei\_pAf!nKj!~~ n5*2F(EGr[1 n P|Oa2%^LF+y.V/8XUq1G?OKb-θ/=&g8濣6L/sD\Xpٔ]K O'?[bă!S  ;>HR:C" )E|(CPć">E|(CPć">E|(CPć">E|(CPć">E|(CPć">E|(CPć">iwR@\;p= PȇrsF|(CPć">E|(CPć">E|(CPć">E|(CPć">E|(CPć">E|(CPć">N諞_h><$>𡼴L=cNřPć">E|(CPć">E|(CPć">E|(CPć">E|(CPć">E|(CPć">E|Cq ȇBq`·Bi8v>JN%`CPć">E|(CPć">E|(CPć">E|(CPć">E|(CPć">E|(CPć">uZ|/ en%Xfu8ǖdP)kZ_ʧTJ^ ;#0,[+3ksJ8+;o}kgJ]]`+Bi9vwRJMݕn+ 
]v0x=9TI+\'>Z&VgSPe_ƒZ~MPT;YW?oHiqvY(]Ri?U-I1Fy.r5'0᫏.TNGꤴw&ٳ/ ,Oً v>K's75ߍƳmcreŶޟqH_ )ƪ*YժJΉGڣYjO| 7+׬RJo*r]!q69 VVmeXĐ*($TR*e4 ~0 kP2ֳch@J(9FA`dž*=wҚ/F)StWi?$tPܯPZ'] rW'讐hpU+Bi' +pCJqW(LJk+r+0CBW(+A =+cwW(䮾we%F ~;~gvWgڻȘ$ʒzstrW cv0 n( uJ-tW/ŀܕ01?w ;w*w 4 + ^+W+B)StW2-lqJ>D)5#wuYϾ+B YI=F?Ӌъ04M?4$}«$4gkRo!V=1gazN02z`kyu{xnz̭۬;uWz#\wM+"pܺ_x]ƿj`0Q/~˗}>nCs-ì:Ϛ^>˦ bku8St|}/t>}@;/>_n]fねcQV{W؅նGe }VM5תؠqydΛluGq9 _ q.i j:K7mr2ٻ^-u^Nr $O/J3Zu?ɈF D_ccM}lP[~6wcdsON۾J%V]ן-Vv PJnrͧ1fuՍu~/\a:|_4=~sX^l9imՍao6{fXWc#iKbՍ%қUJ-TesiF5.pĄ[=Bb}M|/z\X,rxT;e@ky>e%6vxwϹbgx8K>}w(4n C5BmD rv7AUp+H4&ӷoWv=!0>&Ňuds-nsĦ>ίmP-Ղxb+~xu"dzul9Ym)u{/}0~~ٜ--F2$:N%tdZfc0jwߴl2nU-MYR3S{pB)Ia4c"kUr7Ia7@v ~s4m>,5ױIW["Zi>! <l7Y$$|>ɬB; +|30u]WaCmB+ vB|hۇC^_Ej%ycr;[\uŲڽV'xbpEX$Zo.ky󚧛k3>HWqTy@Aq YUeJk!-6Sd,Oa,"^\h7m4Ͽu0ypgRH8bTv`޵ZyTq1ipSj_Φ;hݾ_EAn`kϑߺ+pxqh?s]e;9C[v`{:4b `&i[g0UkT%6k?n%mUvVqmky݌,p>Z]UL:1J(?V*T"R];^)͝:¼ pn)'wmz3p7MP7Ds"[;}ol f0Ԥ}VsR*(*Vß\\T|{}m]=n3Ʈ GHC>j#DdkMjN%!,;j5Z#7vw">5qA>OԱ [wo0~Lgf@9WSstnJgI{8+I>$g\[esOsm5]ٴlQ?FBZX`PwAG[s6uh,r6w|nnsc}+}_>B1 ;-Xhm 4wFm6*GT KIwi62Qx8ƺ`Ż uRm[tfY T!onOߍF<" <|5u6g dmuJ'HI>OO;Oa! B*Caj3jX$"﵌FM̝ͭhPYLǟ@d5eS$}}[Y&̝q&wPV(J"Sّى߼hZ9,;NMr FivȌ`;y{59i*ϛkSyٺ$͊eBNMv wz^6n:&ʭ4owW2},ꎎgjHDUo˵@Mnߤ4KǏםގ^WNߨ DSq@D aRT$RQST29 B Ev!YyQ՘m:d@=hQfkd}O!a#ٕA Swf&pc&S`7gٻo%P0 0+:mqktP3]Ab. #2aQL1oqh|Cx)̠}>4DӛyQFR*yrg8W k|T- zuzk@ǯ~/~~Wo.1Q/©@`Z$w>b/'j U\-on?~;/{<[Lvm&Y-XSЗz8N<pSp&uQ;OwDCSv-PAqyK_Rŗu7`! 
[binary data: gzip-compressed `kubelet.log.gz` from a tar archive (`var/home/core/zuul-output/logs/`) — contents are not recoverable as text]
# XV|<%P3&,e'4@2R%!*d{G m!MO#T ՗o"QBI(%Yw ZGCXx2V)pwU$#wjyMoJ9醣ɉnj/=k6'bć0oSrpVٔAk`dEiU,tUFp(Wc2) n݀[u#]KA~D +ߣy!-{}uJFy*+o kS-Y4ߧO.E+GCw"BG>BMB$w߼4pm0$&Jj%YG(D- N{~] i=Tߴ/[H[ s$$"j)I=H&HPh#qQ,aG?'r͘;O7oD1Urt:׼]C='?fҰ4EPeBN: , v92>r!&#P;"Ժs< -:u!ddM\E-<)U un g}z|1ٵ'_wo=r+ZfHRVrRt|sZDE"ؤ*ɢ!h]We-sܿ2^0ǒkR{3%FM.FB_NM 2E2EhÝKV[fCk)H!E)8LK&%cZ[_2IHx4^%)ߪlnrU$ZOc8H I"%6́FީM2&΁̱vAh'om:Woc&&~V÷;O˷߷z;g6gn>|z_)|S<vO]fkz-tf1z8'=4 6>/0ocP_|˫:/|8,\OLqo OzGJ5zhjї1q{pmgsgNRn^y[SV 7i@ RFs& }JN܇зArg7ޡNH;$JǢJ#pQY͓C \bkurY&X:;4%V^7+HCߪMM>E[<@SSqF@D,ٿ2[s~7{㶹Cߙ&9d\YTus+SHB܅ȓMU3{@9,:Coa3%R7=Tk0˺b7o~s&~cJ5͢݌My77FnjNV784.lR Y"ˁʜR>!Cȣd9E <3`<]& !+s䉩LU^8F <\E%2yRwQ)j+j]$1Vx,gab.{=p?jsh||ջY7%ɉ?" tvҀN'!O:{J'AȈ9vC)'}bZE8[T9Xz>b^?-X>%8$Qƅ.ݿ6E5)&gi)aegNJO_PUcVtOWX/leOEGV8AQ5|TTm8#.:NʯҢNԈMˬRӄ077 Ȩ2,+ȏM=Wb~ _q([oҼV~uhr.]}:Ƈt5=ƕvLmu뫫ͿZAmH߫_OϓrE?wm}B q7Ѕȣ:ՅQxVZך{kvp}HLH9At{x_kd.)7lAݘ 0@RC͡39%K}?\Y.JwVGAĔw)zrʹ",Kn͡n?͆$hმ7vrM~p%3H_t711%eN\q[WavD<רtɔK>'@*7)8'+bs.8)/uˈEtB BBg59Ib&YQ"X/>2R>.kW1ٟPA}N<#݀i1 [zHO2޶sOQ96_>ֈ٠ q98P]L*[I/0yyLh2ZbjAa0X;L8jK ?ok[9?!x=vU}{P*PPy/:jZZ(bȥ>ZY.ϔB)#B ! aIs*O, ӓbU7]ڜk9W3ږ8-c=RVӌcml ` Oj 7?~-ȸ' eqo؅?OǣJJTh"eDE6֨VuZ9e\ʹd  ( AHQ̦&j&ce1e}QK;L'F1/jWӎcl`ŘWy:& L\J'eRg%"LgN)xp=JUI ĢLD%&HFNl}f,NnsXM%ˤb/ҏ#-du-b7` ALL̥FA|.?YժMw>K}.(ӥh6~#~Kk1:'&41"N"I+0>D.ࢷXC_$\/\N{HJ{0h s& :i~ޡM߅Ĭ@eM.|FDíwkɀ>SNjo:V|v\[`#egٱ,zHQ 2qB+Biק!f0]qP"2ǜN,>|C0 %zp CibHPFam~5qd+kz)U/vmIFD{ž]?IJ6쩼nCj҈`b5!  
e"u2Juh}Z&u3f4ybilJAXDĘmviA;m`h߃dR +^(xV nu:cG59&p/煇^ghp 39g(rJi@4øF/FB4 6(h«x5D| ]JlCzDzj5Dҳ\AJHu&Heu2HEc>%J+hƪ#(⽮ũrR!5AGK|L8?W7sKg(gS*WFk4a*x*霹,тc\+52〳^>oؿ'Cou)8#fl\=uw?@3A=猫=lŽj޾Ijxɴ Gk8!: -ʑ_FZz[76S"P7zd9=g͙D /xz ot椏qz:Rա%qi263:Yi r2P6Fp \͌͐ʢ\cKPty;;o0 ax?ÐSmc4Z͘4Mk 1jDi dY̐cDCcP8 Zw$Rdls@f0X-MOlڔW4'$/L*F잦~jc^f6zt38WAU|\q⎑(iEJmцr*=vzs_wJ8HFִ6=SjfVSwDku&/2r#._/_;Iǎb~F̱{$#WlBH{o0ThK싥=} q l}NfS=wS<p)uѲwcj-u&z,w"IÏKwql\#0aN"[aaGs8$(~qq)rG.eU꼃Ux0sbw'=;$a6bszH\VlӻR7 u y9\UqXL_^2Ym7s+|JvފK#aC猰znԂ?n^ No{< O{ӯ8L._?;m&`ƑI=פم巘PxRC!N1̆ʊZm8uLw4)M6z.F#a s_-}2i˷;ؑdr~0z˽nìR&@,Q be<6xˀgcגc@Y P!b$>E $&yRkK5@ {9)H&rCV+=wQv:D4N 2>XH:iE@2B Նo-'\.S9YMJE^y.COB(b2'd!RVH"SAMD*wpq炇դ“Na{k.{^oE_p<;: KDڕ$G);YrmM"Q_Bأ/E9@8oe#}19uxCە~Ǹ|=Ӫh x w/z0K Ϡ8Hn[Geov\j纆r 9z/A#79abKE^sJOˠKI?$.g.A=yo" J&ܨ&rQ Ɠ ƅFY`H*ơ SOAUH.: 0ޫQER1yRyD V'ife-vspɷ 7.#pP _+uöV65x+1-'9gՙV* B8. |(0N mecMԍ,j :Q uUi3ϢjPBSJMHHwVA)9t) lfZdL%pMT )0Ew.:C*Z?A)vj#n8⊣mnڔ߽Ǐ`/8IL1(PL,WbPd"JT\v={ak,G^gQgfhLs+ CCI6P\,vSzE8OӠOK/%;wDg-mP1A0OOf7![CFXtDGSesHXJjiXJyJ0Bg}U |߬}.LoVY;|G^ ε'zH۴!4i`8R5|DӍV67hX̃x]%^wTM9dcxF Z1RGPمDAg\qFHtrV:πYkVq& Jf[sĭTe_4eC COoy%#g 7 ,R1#Qq: "L>$HEZy(;ngngZw5[5 QqR_BFq0&ɠA9#Xzeθv~~Έ >DVYZ,3olDZ,:&Rƣ>s;6`JfS66q]fqђj'K*HLp9ĜBLtc69 d6 M!:DRVjٯO7Ǔx\?|wuŦhvEgYe)1ffvZrZE!ڤ8Ӕv޵q$WOblFEHlﱱ7"WɐbT̐)j$Q6gUU_Z!+eGqG"GCډQ1 a1BH%upLHǴs}`O)8>" XjM\ e&#QHYu2LwZ [쵌FMͭt}(3r6+ۼKަ2jLǟsm) S7* I3j-rWo5k#]{1394,i~ ZYuI*$ʑIz\d?w3o .;D hv7ڛwݼyɛ!kG~Hcý3_u2_-\^px&MG7=),rO!b:P}!rsTt;BoC*j*CdARhJ"9$8ۣ=%l[XL^.{oP`HVtb iJS\Aٌ\sy xpv( j,gyh|)yC\L6$5'ͧ}M/FGJ0RqSr)hyCA0xֵqsBtJea}qOExZfyΫ|y}s =#| bwv^9o@\Nf0cn? 
NBm-I[hk6e{-(aQ4 '6{ JJͭ6lkp6g$mdFGRoñKEZ?‡V;Kvz7TL^T 8~oO_>e폧|_=D>W?¬p`\h#A;`fRӏ7(Ѽ/#^"|ᛯUGW-݁-?|Dihi*i:u-˻rK[n*>ﯻل(!s:*n7/MJdJ\/j9 q),R䑷 gk%4">PzƺY}ێkB9e3^J'T[ QjK4q)k~cٙL$@an|N~>qr\=r҉_ް&h/1qOiz)MtW RnK74aca4DX㨃4A+9g`,'^*α^Ǚ g"ԯnL:+%TE1#|ړ%X+G!(e\1a"'"̊)WNpBd\W'e݄uG @1aqp >息Q߀|H@i+=!d`aK.Y}zmPDav+9*c8#J.u'4iZ _;`>}e@C=7) fd;O&_><f` ^#ip aҡLkb/$;Ub,Y0!"=AZ3E4C')2b*u 9xu;޵W(sҎV=^xhM>cc]L~Ӑ^Y-bWϕA9Zy8 n 3lPZZ2KW}ǭÂ8jD4s J Iʫ9c kṂMlXuu`Ï_E.}f%ĭ዇&pAR1 +ϥ6bwl )u8T"<+o7[{o_NQݛ*->p7m ^)]¿7hp d 16il).5"(TǰdR[`M4 >-8[px׶(ߝ-h"&OAIm0:d7ȕZ)Gx:rCбSGQGCJ9ō8`1wahĖPƽuQf/hQ쌜bmwQ93p#>W[ۑ. W{]tbTs-_ 8o ~ 3†a&TLQ \P3RͶ|S 8?E^.@zMV`rQEKPL>F#T5qd ̹<8s .$gUq}Yo@5jQsx0(y]T1eWD]>>?,^}^g^狿n5@;^89K-}EjR_ZRN0piiG'd>$ͥjH-mP| &!|?]`fFb|R|旟`LJ/"4dů"|0?/bjv],}K/~VO7cn&Mva؆G̎ '_f^qUǎŵA ]Tb. `Fg.e4,-JMZ$!?߃z)lz tTb>ٕ49@ TkYO_!HQ[|kkSOa2 v6LߍbAݮjےukl+E?+R@{f^9ݙ UF\0sG-~1ql?l3G{ɱ$2AїZrV:AfG^xI' js/'Wj& &uD%ԖhQjc6HM0&$|!޺_1>Ot;yVj|J-c13AId%NDvq/)np0%sH[ F2\/˳.H;7I^NwzCl( ![| p8*gE}׫a&T[O'̋{aBdNAN~ ŚbaEr1Q:j(w>H H愀x05DA-~yj坵g-zZ"z?e< O}Gy,1%(,fy1>mRFLi`a/ tQm$2 gl#LsWy&*+Tͅ%(ޛ'JxGOƁJ0 9з2ŭD}d)6SM5XvOrZyi%aWuq(Ǵ _8pABLz{c/?ߍ͕"EoW0'p7ry@ݨ~mTnT`0֖l^}cL2% srf')Uuv"\%dBJg{@!=pzWՌg+^1yƖrMp+5z"\[N@('')M ?@|w]ٽƠ%V//=/k3Ӕo7ant~9NjRqByԵ9Paqޛ;32.X:٠eERJU]ݰ.5ܖ~E@Y$nktpUo DUX1nQ U,wG`<973[Zp7 vRk4~Kެ?̚s\3 [ md.X}LrBd1`7{Nk&([*< epPsޗ=FA>x-}K6 ԇMJn>̀;O  vi,he&ՆMrl sMÞk lپVTa(J ,QG0N4z<% "-x̢D^mSq6fT.$h5\JjVDʐABBʦ(Q ƌL^ˈiDk45[!-k՚ rS-#[Ba$$`ӁxjA2f& Ѣ Ƈ di ^pPfH cFPHނG0s̡tt$"] k$03^VKh-A(wfGIeZG D|*]F F*04.!A 0РUX[JF4L BLZjRXށ5\OH-Q߸pHIe&S ,Xk RnJjkP2D;;"&7$d % 62% 08=6T+#5 w aAy&:ːAc c 4wFqa'05!DLiχyvQl%OX`sT1QATD&R4 &"$o% 蔳H` !0;"gz8,W=ļ_˜~cW-fl}NM$I)ad-*2✬B#(.XBiXN+*e`- i]h k 8uTBYBy&m4pP*XCjMn?Pdp,cɩb ,.!`+*[m2mєA7̹? 
-2 5CPP*ؒqT2 j)EQ /)Ո&1bqѳV JӨDHX()z e09!j2a+֢Ǜ \uXgLD҄q!4X?ѭݜňd6cVGs|1|,5i9j: L ܹ[`CgVm8JhUDAKJ14;`ǃKu :"/Q%)z&L&zZV"a>X\$Lt~ tѫR%lPc&ڜ]5hW,HZnkTp"&e#s4HqإF u gh p eܞ{c"Õ,`<*Ü`#ҁʧK6|2M5͙jf -NqtU JB&Sis`* ]ӿ XmWG NkHk sQS9- Bysp/Ư(߼}H0Vvr).:TP<*\@%EOB)Wvvfu&iT E+Mp6>W+f)oN:XqY0S!;B4[1<*ᣚIUk00x׳Rg8 IE ѳdǀVm,2aFƕZᱺv=_85B'Sآ,% &}*p TMI['kgִtvr5WhSƅvMgQ TfY(N%gx< oX8w%8 mipq~vRv\=q ޖSFNИ{Ș?7#gyq)VCrN@R!kO8IYOR NGjC&Q>n^4mS{W6@Ao:efM(eteSte{\=F*ZT[6GjזHŝ \^ lW+ϫ5z׌>{ -|l$o<^1UǓ" h_/]'~Y^$ϳZ|_Ogɷc7x ;߆~Bȗ'u7wz7؍#N}:)9Ə|~y+>o^6m@0Wŷ.Lp| N%d[Ɋ{38JMb!x럏x fh9vuYf6&zmqy}\ V/-j [gd;G/ST?>_&_h?N.ONŻQ\;F0cۯKBSo5ږ+m$]ޅn??kӓ/.c~yTIU\Žk2Wۢh .kM4VbPG!FTIV+,֟^{K!0wu@={wԝsYcjS~(ʆKU5=#b.eE֒+]uU(XQ(v vˇ$~Bl^{yB̳|׽/Zg)wSp߭u5755盟ZkLv|fg=[*?x}wgw"YmCzVzlغ6k+tg8; 2XnRi8}/:YijbC *"%75"mWA`'yl8]L2]llpnj}CjL˧?HЮtϳc捳5? :mk7NNj"S%9x/]eڋ1cL-ܶب=Ru_`޾[Ip.՝qC۶vEYn3yWݽ?ةe%>9'w6}B٪nb{q.FLබM^#s:q8 z|3uSvZ&AK,f ì; q Oo/j{omԠsRL0 |9aS0xv:1mUE)Dx+5 mX {UѪkКVۅw˺ Ғ)EN޸h|bLVhOtk7i-F#'RG!kg4Y$<7EߎuZiuzyR3-˻npay}HXv}HO3I^?Kp\/a\{9?LZ.V>MG:.>['F n??+[@ԜEhLqԵk#jznGG?7.2}?}n_-!bk̾97?(y%Z఼nխF{M~lI>4{]i|zb,Nf.o=Q=aAVefQ/xۏYz\:{\b| .3>IBT)qg1e񰥔KTfDVYXi:|1s4&*pB%mr|hsO=G87h)l\kB4$1R.oyzɐ^!ywPLja* f^{ =ĞT#NsتSST< :3hJZl-YdBi4" !-k]vUPU8㾔1CGϕ*\T˚y1x=ضW,_VuǗ'{T6C'ʅ,T+-W4Bz}\zs@-o K?]mo9+}ٝk`>d37 6{/|%ZN_ղe- eNH)Ȯ*CsVTw?t$|r炅SzqZyZ΀{K9Cvr)Ʃ\6jMjF@)QB┡>g+S!ǂA$BR(pK6*[3Ubq.ԅuuQu Eynlw!X>lTa4m}]_6x<<=}ּ'$F\)V ,2&E(2 tL F*A dl"\$p2س,Mʃկ3A%vDDN )׆J(En1OEkC)M{ v+f>Q+C m2&QXrn82$hө1:+ +eӒ#A DўG,DE=N u~`]H2:*k bև _ʊd0})8P#rRV#׈]0#4Q.֞DmVMRQh-QpsM 44(ZԠ ByPg\p^QVQ"^٣LEWP/U+\tYKՋ^^6x"g`NIxȰԉqeh:zzTa18Tn"<;*g\׋x;O_5 vu\K\LޏB(܈Fz͇]*$v1RKa+ᅨ,h[1FR԰Gsd?9C;s 9*QS;;HcAnPVORkL:юA<@Q֝IJp&$0D26 HB r gչE4 ;)J_Fo^T?LW4Wsߕgx=wԘq>1}|Ba([p(28@DJz7 [!|jcX C JkFN'JV^z\ҥzXipU_O|/,$,@/.FAA"hME*Q# yRDHkb!Zr aAoWENlZhN >@so=H͕wO7HE#F7\HSW`*Up6(G#ЌJ:-<< M5-33#J|^|2Sz v 4:z.}RhhS a ܙ BDNJx O$nq.@ҽ'q}y,GvWw.ۺߛzdÕ(nݭd>\U1o]d]VN+ p_!f*b5E;\~9mE~QkP>w!Kkr5\k<3τ#:Sz8}ֈm\TOhZ':؁Sv!@pQbIbb^9T{BSv|LKq'ăVŅ~ &XMSPRڑ\LW{n|{2~z>u0m}t:)Nmq\tg6[cz4?; 
d2xL[F3$r}cBQ 5ZM}ms`6}jxmxd5 Զ;2tm~z&z.?C籽2 6]eoʋ, ֹvzۖ^ϷKѮz^k2ota{<"jP[r{ZtdGYo?xÚ鯚[v_'=ӗ?v_j!?6\|%_Y5W ,ԉND]il@0' E7R?|8PcokQy1K^å#cd,#\Rˉ "8'S:σ3ɀIݪwJW\r|ykӣwdQpIBh"A v&q2 13 jY!shkT~|Uɵa7XȔ + C!Ih4= KYXs ^ f4$if:p-\"z9`SH|[SmMR0h$DBGLsE}14~9&((J?G 6'PtH$ $,h ׌jNMdJg)u'lHI<á/L=_tqLݍ7'.=}ff?tF~}>,#Փ)+-y@$Û}Lx~yΟpj]$h$mXX}37ZYZ~f2}^G6:V"^8 $ +eΪGvѵ,|Ų[zo@q\C): WrǸ?o˵ hCfJThۏM7ƁDӋ[bm{ƼBҠ%-AmOlk &OJO}.U#>wgl'o0S5$`/i0X,";; }WjKeʖ<7@ێD^UtЌXU |&D`pTIyD?Wu&@|;5!a\c8_1add =,dm@Vy l+-N9N.(W78}~,k&=W"]hâPLWҵu~y""N0ц ?tZ0BVh(H1i!+LCQ6E wX|> f,C*b2+$Z$ttiebB }+<2Lz1زb!Y/!TPB*J}:%4B[[sG^ Tإ_ })Wo@ ~^GO0ߒ͟ϧH;?_LqFq6Iehx^]&uH1:7a?=?}׾׭E4Tܕ!c"؛=6<-ײq|5&<Yweaa ~UIp|D,lWW'e~|aL_' iu2lww=nV}tuT&[^ût-IPu$0*~m8Qf(t(dz{Mq<gZNAgu(Y Z`>_?}/pfsϙsOg?P0$zB*ZɐX)k, _b9/e!|Xj >ڳgr4Ú&OschђEp"{#!9 19oFkQ 2)NtrvQjXUAfփ=QcVA66,wszt8NpS n2tׇmX`Cu=lMV/-#4g||e wA!?2:e=$So=hչwW?~KvJE0E%J4"UR;1jgyݨ(V.B!o#? sh Tުⵊq[}7Sf7>+v;jn8 k|s/y3+\)}-BOȦT'eіTUHYˋ0F 3N>wn.ceo[KCC,|;m C󴀼Pe"6693Gwoh#Q(ՌT0&>JZ[ISir"7ֆB Q:pR2DTG&4H|wh(u }}RE%RCVhUIA%Q6b<~"ocsӟ<;KQ+xx(嶩*S%܇ti֚֡Viz bFJx9T)xgIrgE8`n-@5f~[D&dV LRA?Y@֍%ӏd:y$fSfL q湞mPI=z,6Q"Hol:ˏUj{ŵJrESf0lhc5A[$d*TA'KZ0bhg(ne䮯]w@uA\]PE "EGzdꥃԋLV$]hMJ R3V6A_U;Ⱦ(E C*jRq̰d(CR@TTtL>uvݚ1ú07Mw5j_-pV׃/W҃ztqQOt;#vPJF^|dC^∌:!5-fش^Z'_ί?L>͆kiO!ݼWJ鍲?# :iu%l&T(x]-9Jcd4[2qY9&:]iR-VO^R(2ҕRrN;LWsj8gX9UӴ|n8}dk9.nz7ɊFW_//7Z׆o{1Aߵ#/+SLwT 1m͖罵9jGL6$%ƪzgEoog$p-O)Fiv=%\uG5$M(kR5qt,c__;B?ƒ›]jxC␃"jο2ᇿA5^cSIVPukmEyTZF=7P4*,f;`hP@JRG1koew^Iih([0QL=[,x<#V] P's\ǴJ_T_6Lx6CF?dD2j#3I)h񅡃_<_<:?JPtSί˯nIp"bv!n~|GĠԡ{v.uǰ$}fjkn&ydMM/z&UtK<2]fP?9~K{1%43"A'Yi7crLl)==ɉ[}wQ;3mv0DaGiSKT}~Ӆf BD> 11m2:]~Ӿ; Η:޳-Yg1۷W&GȒG gRj, ThԡF 1^{dEu_ڔ r3S$J5o18^{U)H꽇߭9;oz,U~r[KΙ]А*o\]~|zW'%KGrE2d"˕1N<+(GpB~WV3d L6r*/2drR|6T5ȔiZ2}I"8Kq8x [*3Bz+ޡ#&&"k'sM_'&a)!+Au/e ej >QX 4Y 2)0–+c۪Ba|>]:l0SDg1@ r@t hZQQSiY}j|kqoqm.yMzm?|wMphC3uq cvXG:PO)XR7އ1_Ӈd֤r;ň.`Snѫ8"Q8ry2[+c?=t3/.7C&%PUKաOU69~ܤ}'0orz^9O$7 AxcyÜzuv4z=#wӑ6~bySI\]~꛸5:8x &'1@j]JhkNg g>F͖׳F0:s 5X]r-'_36f)S䭔QH,ρ`B̨bӷ>rPMJW$Er;0R3yrm1jwI ek5gdDi{~l*ڐ~3Ӌq;L[W 
tjyCfJ)to.29)u4N1OieF|TUGJ6&z*+.k)G\\]lt=ykG)C|x b@b5ΚT?{EN|?|i6|E^|jdoIDA[9$g#3DD$sݸAkGtCq?쥞I Zk\, 0#,(4*H՞3߿k$^5-VGC26u+kd׷ ioAdGRJ< pSf"/ .Lt-6A,-B1J7TP Q2'0˹5DAA^$yuhWqy>.@ e\8MeD,.1\`a%T8AHI_^O( ^eW]Ç8..|j{O5ba<;;vc\狍M1Hdyb|qqk^aκu`tnp;SvBWY{*:BIIpuOP]{/=/_ &(?+hZ$A`.\z $F@̪i+1Tż% \h'ۄgWL,55..kܻ1è̚.&ѕMOU^qP|҇ilbPhlAXaVA?:o~KtS L f=@Ņ} E \(v(6,7yeѸtSIj01jV PSj3{oHF2 턷1We&\iMUnQ|"̮;3ȵ+gJ}xl#r'!$%w+Ƅ xNEK7q-M q"X:lwQ^G<::h+P<>I`bN4qiigG" bJ̵5 :Rz]<0%r^g?.{]'+77&4EI1D̛ W<(GD3nT9s!JcslQ3o%c'N|>q̼Ȩdy'Y`I# *ƀhb"޻MPzFI* 4, -yA)ıH1WOpǦjgtaZr+=OUNX0˜4U)9>؅WF#} ͚E6Yf}lMy(E!]vVY>|Ge/&fXڇ<:{aD=Z?3_d QpTG~mPѣku=neG>JeGogcpLKz4|M*nKsV-ûtsv<ZIp92.T6p du,UK~'~VRůk=w*ؖkI$bB٠ _ 5/|ןЀ__zjV3+ϧkgYMIo4%Z5x^6կՋy٢q`;ͅA[cA~Np60|ZՒT+[b|yK!+ru5âQ0iJ^ @6ɚa%WJVNjuU_)l>eZI s`KUOː<N1ᬨ/\cuRTOVW֯3;g߿xo^>}ug_e}Z^56\^8 gJV1kX5,aXn ȏ~~Fihm*i:u۴˻v.U/b7CRiY^wĭp;+I,/j90 q),R䑷 02BGb怓>C\Jz-=\ bx)p3;Ux$cXX9 `P X,AkJ8 My7ˉs7r7/tuTmυB{,NIUSH`VX(r"ªp@D ru?亽(k=[Kz %O砫&6 yXKn"'Jќdsk,Q-PjP_cK׭d-^ҀҢ^&:Bm @3KGnD '"&9`h`VHFc"LԒP-bHY:5(B)OL(rna:2,"1rJZ u,l]B_@͕]Z&R>(mgAqW@YL2}@ =&\rBT1g SIŹ枦jj MyZ<2]]D}=v'Z9,CF1ҔH2P'8C3"#, v;{d %՛1Pl;e#t s3<[$%Id@*D^Gmi3WBSFj7 (k4Ih]kM̨K%ڀmTDJBԀPELA"g/9$r:l BEu;kpY>a=q#9 G"`U`6!'rM͙B$Sp{@'QrT{ A!1!΁e>٦`(c6)#sBujwʐD!eSX-zg՘ILAoe4@J`Rѱ9["Z7vD͵RAT!;GTye;k+^yEs9ǫcUu&Yf7TݬW]&UMC1'5"גV]I̯)\--O1f.OtyKr4odL8ծ5 J͋NAk!o7o~ ƙ 5o.ݢ{3RG}߷Y1|YGH"Zr;Af&Yrr.&y2)np09sH[ ΉuXԝtu/Tp݅e%FՌMLrJ͙"G0 6;˜>|K60MhEJfϿ-zFB޼ on3A3K}>g~GsjA3* )dHуo 4_SA>IL~S ¨<mR{ +<_ۊw*޻\pRb\Ј$ll/bqe^6 }OypH%#Je&9sgnuuu9}t~T*t8/6wRc)oD126}qI|uHwaow|sWg$]:08x͇hp~d aA9!j7 ?y<ǻ__9T}5VSnvw:4*Eyc!;w=-IrUūx5T16ڼ4t?V4tpCJyP?%tߵvQMA\4UNeIAT2VYZ҄ ^qn.t^lTSJ t_M?Ș.Q#n0O9WLg)6jpw;YmQ>So] ֡} 폶2鏙/y)'ɗh'_ӛ-R;OP~m[~ʏ ‹s&㓕ɇ;ud]sZEԜhs;C\uaZ-ZRDJkm(l5s~ǰuѫGwy I/жh߹A'W^_z}gkW q[>ߜR>,û]teݖaRfk_W;h{ YMB۠f7nbw]rsߑX4Ls:V!\q3LrwODg,m~34m{VF*,w#wW9huao7n<ϵ!=.`Aȡ~n= |gw:a쩪c[ l^5ab\sz?-# $Eͷw_욏RљCЉ7+İn7S-)z(IkeCo҅tt9 ,vtE&mVޥֻhs$Buݰv5y|bωև\^q~|Rۍ&9Փ,BD]=v}MVqڙ첕IXmn1؝xPt%\(SIU[rThVA.N1gKbқ]EE@>WQz7I.YZjACԨ&)T0nJkdlpm1js@& X,nmYiR odiu͛h4 
cĘDhi{X˄,3+$"|sQ Dg)b/9%|^{ wDD0hՑ&:f%Uz!m҈&UB%%s*ڻOtH0rih*S=^bx6;[_&!}tB$b:Q9E4LovqH]o>"U-Id3<#KƜ 9X\B>s9jQU-wԺV[$+z[t!(sRB#Ld2N>k#U>).d5#xK00Lk\mqy.WPfX'餞.]FT>]8:8*k_c#7%}Q$\ @8`RJ!B Qў6TeGݎ0H%Gx@nj ָh)4"^PTl@t[ xiЮltG*A5V-Pbպ*7 H2h˳ǚ[]J3qfB\phhh;I7W3\i!/ +a =5VX4m:TZ{΂a UhU%`}5 eo Fԓ*eX|/b ~Jc -ljU!R%rSb ,3*d6+ f:8HVG!H0(k@oB&uTdL'Bp,Kv6PLF:@=--V !pYV Z$d BXG H rJc8?q#n7a?%17SL3: X(YLu>% qc-AN[:*]:∬ @M 4V J8 seԸQcj LY#6|Oy ᾬ kIt 9vTl^X:?)jB5m%wV4<¶MR xY+A";>nw<],]@,M$C.2The1h6F2= RobRnB%| \::$ճjV1 hcJI%hKcEX %@EW@Pl5kh'Ō6XX-;l,8^ INRIȚB\Ƣ(ΤI`&#E4Q]  V;#2Ϊ ת ¤0,,H1#dlD(! c(jS,КϽibI= w64IxFDj3+j譊#X}X@hP& %l@M}%U SaCjh\7֣sWkq9|(0έmns󐭚 :J%쬩_+`d#'QI3RҜXfW+tNb}z Zorv]w\9w3.ےg?`mW<ӰS\Ya Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,zR3Ipe+kͳ\ڧM W+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł+\W,b Xpł'\!mW(xB<̕` VZςOApuv]ggT3N򤾀V9{c`௳*.}}t\./7MXyMH]'%c*!?],[B |>; bK:>xyHg_ek-kunN󘃞kڢ`ҏN/#+Y @Vd +Y @Vd +Y @Vd +Y @Vd +Y @Vd +Y @Vd  w*av V*aX.aV(nT( MH \Mϭ2??\6 &lџ}?ۊoƔӓE4b|>zl6O= [Qo=|pm5]z+K󮨠kQ[3/!(H ɖ%zO2Ͷgi.怕ynRts> D+mX_e,`g8~9 $AI Okp(z $IQ$iTwWU^++?l+]2`<`s N,kNB+1_@4pٕn :*y9îy=,%>ev c)b,7B8 c6|,_f x|'a+ W xe++卌Ǣz${#{PLz|ZTOnίd-몣kIdg+7~"*"(\dq:MDREH2hJ][tx!KbVF$HҤ}q\צ 5-Pֺ j'\ۖw6~ X/o+k/YX v>O~%#3_#r1Π1ي}zm&(za JL5HB+bqo(MeF* }>( sWV8Oh(,$,v\`C`$6gR(%9gH(%b+ObGayaW\J! 
gE9˲Ԋ&,E?uq rO6qTeS6w1f i ҪB}ezf4ɲNBav6 C:!8疧ѿwxYMǡ?v6M?63=5_oڪ]"u'pqn t̀~}/}гF #J"ջc vrۣl.<[]M;~vtƬwD}JB0Kr4?'ՆӰó쏚O`ڸV=}1R펺>@TkATB6|L-6oNђ+ǾR6dU E\a͡]/APA]6Lr7_r/E]5@n n!9;K5myg+̆` fa^C5S\;cLG9UK/f%EXD7VKhͺZ´Cz:@ԚmBR;mGHH VATSr:|TdO]:Og .b.3jg>LLj94!,8ZIggurg6MuI׾s/X>w}?HWd燂i[Ey;GHPO*QP+';geTk4g/6bgr$Dg|f&Hd6GDz9XpGa{V~ $L靈>F;d\Uҡk$Ehc<1c۠~vhgކ ӎ_mk> ^y:MsJKSgvtC1PPDTj ʕB5ƒh Q1䥞I :^ ݀`H $7H), ) GFGM/{퐄ТCfw}u'PԷe7 i@dOx%it x{;x63I'QޜRB59!ņYmxD%QyTwWNףvK7UJ pö:p*^qbvUUbɻd5"0$Re!} l-eyߝݿezʯ8+β\q\ፀbeNߟ Cel^a7h8bvvq^Fކxzj[1 7+#w2rtRޫJ}bӪQRƣ0<~$z}Gr(#w_/_!&(?)VhZ%AtCjnoR^ä>{ӵ% λ#1Nn4tT wN^jFcC1GM>/b0JN!6UHw]QOw P3ifӟb-P?10}Sy}U-.bIjэiJ01%&1-W7 5JCImn-LgFLr;5j:&iUtmnfUV]$yl65՛V]ܙwu><G̾fڝ3N7xlN CHJWڳtuGT4*Lj1N#D!5Gyy`AX/@v<$ `sB@Hk-(HX",]5IAQxa~>=y>nݻr޲t ^WjEe~[g C(x˟t̄~Lt}=| |iC3ԁQqRrf)p{}2sWl#y8(y/ZVe#2:8&c9w& y#ڤ89zN!D!e!x0՘ `Le4zl5ͭ]ɽ7rvuYw:(^MU~zejkwpGW.>dm-]7]yէM:tIu$ Ww ń6ʴ:[xҳCl:c:n[=/qr3?̊bC'C $v#*[:&Gz- l66nszףi?N?'l7!Y論6lW20ux2ʉۯI&OԽ]qQy0hƍu2g.\ils 0Ň}^aa-_ڃY8fpdT:T1Nƙ Rʊb &(2+\S"0rHRIxaR[ERcib }Crjɭ?DrTf81b,sHV}aAv"F8L Aod,K&К4ÇsFӐ6,wٓӴ_iYfs3)>tW<Èy KF W;v76]m~  LX ۛWxK2LʩqVxvd\ ZU8HAVq^׏YvMf#L| EeVDX½rPOւ|qYcyԚ JUߊ ر'3c<ggu>ujL]4(?OD.5}`PM) NO6[ߣn^գ=}Vo7pj)VjGPLGPTLY3;sg0yIcjD0 _*Aݥ VB"+`{WǓwR08I8 `˗ 3|KVZ>ojϞ$%վTlRv##ЁDLw @|odMJYyz>&{\*IWI֜Mr}N\^ ,v4|#X7@D3T\Q|ؒ7)x0pQ&UTAKcfx$' (Z~p*iL霥;y<_΁DVP Z* NGt (WLU\ۡpB_hJ8bY>`V)D {\8 6jE0%>JŶI lZVv̋Y˱|dTaXK"2i*`lH%1QMKc5*@ErU1F^^?K.cTȐ;t7Lؤ&^֥2fDDx*QHf`:O}笻ޱm HsljB q_^cCnsY*W(FC ^=h<1O-_ı!KD#/D-I,3`{㨦&TKHC?Bq߃&w,˵9l:ws(ʘE#:R2 F"-m-vVxGDJ`1'Tfi%yp0 AP$@2Az*cN$bj֝ʐ*1`H{ʩRH<΂h+Nhj*"GߡF8GDo$uyJz, LFQ.(H >`Eq: &"XQG#9c\P#8`7#Z~[ LjfLFH6;T 8 󬸬"C z=FЂ%CXjX,[!y7uD0Ta6nmM b ΊN[n iKS\Af3?^}ɱHJ5_jS A0x1Y*~JK&fԴɽVn_'?V^O^5[,i̇$G'RI;<)S3YL8L3$U_א`=Ufy ät`ŴU;򝣳|ఒo%h[Avkp2%RVFҋ9̥ː"'x8+LzT ՝&̍Ow/w_՛߽;LԻ߾|<àM¯fbRW-n5~>b9*ӪNOn>{i$+Y]ش1kZ4Ǵܚ^h׫}b_-Pn`>o*~;ٻ6r%W3mv&;Eb~Y,Eh;q$%u1}%ŒDQw(!a݀@%2-[-2*+?s yI`> A9n3K, 8AO^;y =O:̹|KlG.0^rhFiC 邓^\tH8"@uc5E.&#blmUzó Н ݑ U_dRɜ !"}`XI"=BJ]5&4I{7YeEhv.fՂ\;MP< V1IUXiִO[_&~ΡBZ 
:vI>O}Nly(XFaZih6^aPnH#J"ݎ4|'dQ KaS*JZ cƵ AA h g'rߩ.+gOʂOx! kaW~g@7I;R1p aE>fXV5q>m -,9d"\UFg}8{]69pLk0P"!r44&PIoaٴ6-,{PeK(7mo8R:NN)}=okt%lUZ #~Ȅ8-2mL"(Ȭ1 ?)`VYHg@v@IKLg;k\( '\t\C:̃:I)DF|TZ0LF@ZqS|L@QNPpyXƷe^( 2B.uL2}ez7// z]1=^}"㉝ .36tk`UԍQ*Ҹ&GF~!UL³[ZĄ9v=uGdz#rGdIڠ̩ Ee82$(mЭAc=CdqM4}׷v_~{{+hydu3%nE KnSiAQHh2( *CQ4L{3j(SmQebnG yq`~.6Q52t~n˜񖞻> w59ګ_څNVqA\-hT*kb FiEJ.z,4 (MQsU.Hϴsμ,kS90(E9cI<8,Z%S$X sIbv&yM*Be`W8[l3=?h9b5zB{8>OtpɱGGON K8PD랜9ۏɋ) R*0Rm͐SJUBF+ߋi5mZvMXSsxf4K~p~|c2h#bR਑,Qr" &[iW<笨{^0o'ʲY)!#@ĒS :k4j၊7!\\lSJ94$R\XΪ۔} 0X* 78 29ڞZqJ5_X2v/t/ܫ/m|[C(y?;4rC WWWL8VD%%:+D#rLjfc5QXVAz"bIW>K&̊=HmJcQdO6cg3U=v<]:vھFc 6% u$L&#^JȳB P^Y}i^\J,EN|}!eX$RjRVaGgBc<N_>vD<Ή{xDLL̥EAT<` mX+T־s1C@Ԋ A.U"?RNAC@GcFYQ־U-D/:_uLkլdW+E/nx ƒR V]DŽ]4cނ1!Aczx,uEe(v{paQ:}]7G>,s.b, >!ܶY}6C/Z٩UYBI?-[P&MxxIGm~J̓i|yy(5+j=|}g.Ow|o2UM7}fv̩-mQn%58yTȄ",M(e/Б 楓^CLIhR\a{gnGnAFmd6ڞԤ0~K O7iUގ/ qxa3A Ï0("!,>6q4 C3NfZy տ(qʫ΢Ff5RsX/$Rt3oד:sY^{Gg´f`4׿4(P)\wEniO2OۛyQKW7Եn헟_nES`l,r洺 g8N›ttF#yӹY];˝xf~4?6_~L/^l+? nx_~.݊ӏ>z`mW_R 91*C7&ך/p.hrۋf`7_T'N$A򡏩Wn^!LTD~`nNߐ_:+:Ď:kd*CѦLqL 8n.~,_u0L[@p ~y/ܩSiP;ߐM0d|bij?plF\m˃mejӑvsMvh.3y6~X)qDGVkR*v|s}OS/kIrg'Bq1cw16 ,jFhB~ZeZ`_NrTڷ9E[#w~Ψyʭ̷AO֬Fcz1^ޯZV6qLH[&T_'B2r lcBPHXMEh) 1D6R-KOY i+q4(Q1DuIԼx9Y/jl]^g9QuAx.~F# 1H-ۘ`89 iSzc*4呫 sJK?hm8n~^Ǐ/|_2i]EO&?'|djOHw// c+8z66C9Kr0^aFiJ;%[ phc'> 6Vb ڏ'MnMj3"1كzyC^ponbYgkoBr^ipGcɐ0j «6Hm]N;sBMr˲>)l}N )>A9UM~T q,Z3\0#pr1@CU0}2GqV ǘ<U )E"\IʱՊ:(DDJ85 1N!pLZfjyF' RZ?{Ʊd`d[ &q!E5~TK)P~dqV_Gm`K$U}NuuՁ~nnZU*ΣW FPUοWdPkM]Wpu~oz4D.bT(3og_^wF+pV Z웳ξ~?+=9;^>YѰߋ)&DO)@^&`d1)~/U l<'_ٻTO$%rY|ӫz:( =7T"ݝJN>_!B^~l˟{7Lj'S;M4epQT#sPmoIePKzHF{}ȅQ7{A.?WU1Ut2W5 !623l/hUSewRŸ*ʦ:D+c();(! 
Chc99>Zw}pN%Jh+'NefOq(UVX@Uɟ <C rࢅ/P`~mkq6cH}WƆl1MIaoQ# d|V+@ˣ媙lHK1a3d@ Nh6yV$1X$9Ժ")CQ|w2vB2V:YYz,[񷡭♿cs;ˆ8F"cZK~?sۖ N%)RO$hK,͞p!iZG6mxTح:t#Jtۆ[ e"L,p\ *g}*sHJrܠ1צ{ܷpXVj|iĢ)\fЁcQD%(x6, N0dp̝T;&3O:fuh c dc2['Utz < hy l:8% zNփ2VXHPM!@,49.!"9HM, L6ʅMG̀R(X`D S9XM0m H%~&|TN Rt;ٕ$)!~L9a9M GTF 6f}h䴔hw0ᶻ{ >3ѣWĜ(rw9nx|RhZb{ wM xrtꎋF=MFA嫉Ajbl8LP޸jLčn6k(v%ά]`>R CIѦʒfaKwf3f5*]jouT 4t`%<)6TdHݾN;:5oG 9qLOl; >Svw gM}xVaU8;W[~ZP'LO&{ÖRh 3Gj|DVJGSbفlLJI7AP<sUO$@#w7:4݌L/酋idUa!Re6'dȇަ%˙ʦDK|SVjfFqU,t h'IQO>^?cq/ݑG"]ҖmCh\!j8.HeZZ(/ӄ[+4'4:4Hc&:2{E = :3qL-餑LS,! p1%- RD{ jd+* ZGIBC J\˒,REN# R[3wAE-ãֺkS&b-qRg\]Bj.M+RIm@ `~6/y$裩kRmcnۈ9%9Z#$.8|'--'lT_d>t՗b(!2db;ǨX*Y2 hBz&"eFDg@ O +vډ=1`\6jژ_ͬ<͚Pv̥ij! L%i 9;X6Y +@9!ǔ1PT$gE,dͨM\E-< UԷ8 p9ί}!Z ,;xE,YC!,;h#"*DUa"FM.Pʫ1ǎ9s'I0G+ *D)RHĒ&J#/^MLc%棌ڶ; +2ԆRL}I d&r Px6qgk99Nt)M{$ko>4CH |saFBmusnPᚔ=YtSt>hԉuHή}2ZR`7K{w74dRzw8zO2vfSf:[vީyQVj^v 9 ^n4;޸H%:꞊ea-ǝŵqr>5g?+z2S?Cpvhs#qBm\ls@O,ՖH=ьr1q]$8ͲҎdu da\h^$IYre LdkUzʌBǔ},r\2D‘Jp<fSR<{|l짧oo7oIT"Yr^Nm5z=Mj380hlR&鐙g:N)!FBdqQՔKLvw޾Ef4I~)e]A فo;~8kFeIT$6DO.՛6"&8j4XHq'2I\0@@i<&$秽 ٩mB u:G>8\IFx!+s<)Bj[.{6Zm`Bb*,*yR)QSB" [~8˧q@qt9v_^ʿmbv!Uԙ[0s `&utx*S̙*ݵ9['1RF2N9RsDFF$(v@=q k:oKV%a"7Ju (Q;JT@Gݝa-7,!7X͛k|.:כ۶ŕ6Jaˀ Ag/ E&Rd%P񀯨Fy"ul[Q4,w/˻Mݻls+DE릨f7E], E=t(?xDdp(u9mWH)"'AK%xG\? g2d tɴ"]pՎ*k!GY1+MG`'s)Caoh(!a-cʐ(,Fr|V&Mm)ʘNN!J}e`ZwfXnuB b<7'=? [N@Ŕ qAһHA.:z$t$UZ6 MС1yo7E 2.S"Dz#L o9E.QKh XbS:ϡE]덯A`MÓiWÉp<6Q(}C' >6 J{&̦M H~jt|?,k@]cb!AVhAidۣ< [嗾{N<6E;QWp{{f%ʤ.7Z2oMЮm i)*kiA=q-9m%;h,h82Y2"Ě ;hBvb.#/D4*(dȸ#:j 3'cpA2g4aDET87⛐SmA+!HBE1ۂ2(c61|hDԂM9uF+q6|:O癖2U:7ӓ]:yr^m]}Q MQyJH+P FCJ(ITNr0B`>)@K`*Z*kK(',ŐcQ$h* R8#c; iƮX,+(~25I(/¾@_%W\Dt8#vN5PPvھ1jgHĘz)kΆh:,U2srdjłʺ%RH^tԙ]II1FRIDvm0N*)cGDԢ-".7  aR&feF(b&05(AIR\.I^+?ْ ׾C)0ѐajD*gB M ,EFUH9i Ѵ}J-$|:iɮ(pqMOdYfd2Hɘ2Y,J8hL\. 
6ӎ]P5CvE|௪;ئ !y(z7U w C=/*9tfM;NٳC/I笺j|ӔFI7+G}|y{+ڴgqhϕe.uis1}u_l"}h(1m+P4{DxR Q{K عߏ$/\\MJړDh\d߾Ve%c12 A+u& u*: ]}o:\~Ӯz[(z;۵'wL2& &AejaqIA IdF Yt_@ڲIY*%6'P[5Bڋ!Cz ]`~/K"o&ΖJz\yb͓*F{u58W_?u7칾孷9J"X!F ۄNjo*R$L X$eJQMd@6e1"BБ9{B,Rm%H.sXp7XXd%z!g#XA2 HZDDF.4{ {f0} 5reVy+@,cR,,@1#N J{&c 0п&uw}PLAv>R/-ݐ3d6l sϴ1x2UZ~a7tq^2>9+c]aߧ3Z;kf !uTL_R>wrK7P}`M[z6Aur:WA_y(^7J1N-K+ԏѺ`L4QdSˢtbD`gek> .Q ]z h %,`1u(91eMlӥHkNh`tIj{ƽƵ 5+~ׂCkĬw7?,o~zD6VWw4 »dCN2Ա;v''% n(G~i99e ,) ]*d$EfHa@\6WiL>{Tz5[];ͷ־^"[>`à&֌g[U r !ɦcEEISj "5oJxtv. гzym#aH)E =i˟Z,1k#r@̬@Y:!D TNq\Aw䙑[q!f=,س $!D-c9&# Kh`ͬ{cI-qd!EZ{vga:_jՀݽͧ}x P켈ڂ+ba267d Y;b-"͍qߩI^ 2 SKR\,;0+ٙw&-kaTP (wܺhOl~_ kIx1]Hi2RwJi+;p4eoO|1=?qCW_;4}{ihUe-zXhqu3&mg;ֵ`P\tӪn.7&qF5sTXF|W%'ǣ`e~]z>~bØxk߽ݪשTHu,P{)^'U\Fg樿{u]u'@R+:X'ė>=,h Fo붷"&VfGk+o+,(vߑژ_/G7K,⽧GR]?S|MKAGK׫>|EW{ O7bt+Y^ d:;Ag1EeVo)'a'5hR Ue ܩ<5`Z5H{.%b`Q+c^E%"1lA"M%DYwD-- \ͻD~fqoC:hequ~T~֓[[p`W^,}Ю"߃zE嵋iK=rH:FghP q/ߓW-j@iCgsTz6B%Eeѫ|S~(7гbm-5j7E?'naq}31-L+MYR;[P*CyK "6S c{%wo%m=,2aҸZKHqGSқrs$S Jʮ,m>mTZ[SNX\بDAzDE? ]Z5}8FBѶanգwo_N ic^0ͻcD1~3o`=4[.y7`L%d&.x*Q98PX> ֩u䨬`MɣDWr"fJۤUI[EJNTCorJs Fꐘ/R~wE$@^υƬ8[ٯZ{9y#NJNN$9v}dA"I"* PvYxMHѩ6L${mA[YJT1F>4"{j:L8|Q7cZ9Mhv[5ӓko}/s?, ;\}kN^i lW&/ESTަ3V =@*"%$HX*恄r0B`ۂ)٤RR1K;SNX!ǢH$e fjyqv4cW,X&U5I]|EYUZ<.C+Glî#sr*e#rKVGi%%tJ jM) Esa룎V\`7Y,GD"%8_p(Q!)jQA$q=Uտ4 *ЊДdHP (d$ju;#g>UƦ?͡B:ΡBzkč0p(@!{;f@h`(PHUHi*At;#gF:|bʄzLθd_H;֋׋^X L\WKD<2!B.q8 HBV[:t\׋^<}w\{Pa[{/S{Vo80:q >%7V*qsb'=Q{P _<*% B&Sd:`*Qfs3DbK>x[R?39e69M߫g(K]q{.:bib3(#nePL삏$/컃%sϳݒh{L%$ם0MA%Hf ?ϜNѕ"Q:e\ɂYIN~Ӿdcߴ'-VgGl/۬&"RaZQ"$#Am0C, HK$Ck(c$rqqs.AE`j<{%k~gh=\kc^uWns)i4}yhzu>m䶟;n+/ 9P&8ה[RZi " oC.OCypP)jcZ(M4q&A4ј >$vFc~eN 1 4y%lց9y?(cHpQZ)Q'4a4 !HysgE q_V!\QΨW@cH~Lj55R5{wuD3Ϧk)93'#H]ߏ<wQKF-iLhXZ-EaNe>Ș%:z);Hω5'>든99X$-[i RE>BQMǰSMjQ[HAH[Ə(N@(%XKob|y }U>c4L%,Jhi \I! 
1RFe:%*EKZy=}W{=wpX^l(oz=:z~l=ӑǵD}<&Q)} W1M qXیFgGCD^<;-i_eH>m4wʺ[l&K{;El0e|&*Sh+UH1D:E渦h2#Ɠ7h@NX8W~Ot`3/bzZUOy4CP!4KT Zqz8PsO&^ty2\wac)3k/B @в yVR9#9epi΀Ipy/I^;&6h]G!mD= umٍva^G^Bgn4|ŗޔ%K!N\\vNY *\߂3grqlGU$/y@HJ:d"h[4JNRzgZ}fu$`y&p}gr,3Џq؆%AtuT ;p ;\1mpM 4|C0_J3 h`l!f,nk?doȆ ܓ;PC,_9wíi佣e<~R?"CV3׉X"wu-!}+ ]2k7s&q `+>QMBs8^q8F45=anSjvl]W17!nm}kqW(nudnD>m]ZSmڭ٘rf9DH{k?Z\ u-weYU'6?7UމdtyBk$4-`Epޱ٤6fev6 LRSGojeV{KMΰMMxүlbt>ohqmx4t _f"p@}ʭ4:nx('seMHa) d0f%YxG;9﬎,S~yE7t(΂J4d EsO2J ӎ5Vח;>^GJ/s`Heٱ$@7^xێq\sT'OhL -NӍl:|Qҙ҂ҥJc/%u^dE5tluY!^^|lDT&$ILF.(P$"4msWb&$ {IH$5! KC$Ec#dHюjgtTVK#7N8ɜT /B<1&\ %dldR m?h6قC? Ί>]60A2'Wg;*I߿*ʿ,d:[}뫢e %\_^}x_E`VN#o)zs\zwMoÛ_q*XVPoFdR vyh! Yz>f}oX=+NCW(<&@7ZNbLǢWd{ӫISMdM5(1#DL'.E- ]:7f_7~< ,$e%}}g[/^2NYVZrb4Ѱ57pjP52$qBT&TTL\UT t0elIKlKfQ:ˤ$|[!)53(|L)"zEz:E@(&pUcGX>ZJ~׷4 o$%+Cq9 < $vEٝBX/)w?-?#ѬϢ:X\ٽyKVPzqk(ܷr2ݍa,h)j6+ +#|o<7JUtƖ 8k S mRYz otBu~!RlغJrxqW^DJAZ[0 AjSB8L2 :SrD>ez_QrR_XT4Q]>AhVnD"s gA-c'k,&m6IM _P\6tX#5qRI.P3og%8ftr.Y%vʭt\T-W}U1n$@`U^slN(1$ZԺ,߻W1P9 4B2yr:UZI(aZԭ E#ďQX o'ţLNKބ)ie{z8`w#Z"͔=Ѣb'*C9䨞Wx>FסdCupp@2] / ѨSD>pÉx_ܩ` r0@`Ml@Jz_w@2oiZU\a6٭ؗJ6ʵHm6p;PJI6.p8>Ed}qkNMbGcC_ݩ.#DIQ|EdHA䌳˙6DШ%8^uTxLP-.4{oT wje &dmU-wb.'獖g@SC=+> /HTbBo1*M$fchp*ˍabJ܇ZvBmPY$X:2%y>LDkdPܡP@\N۪g}r< &iW+_߯-blҨof@R!a3>#S_4I`A{4Q|3*L MU&s"#S-.2ikhba1/[%[ oHZ2ګM3y h2PQZc a#^Tn!CHF9ܩ.wqI,"BلIeQr D&LB*P11Hg<#j&Y2@ deZunuy (Μ.Xhϕ 7+c> ZuuDp~A ߂jeSEsW9X*ӵ[/=ׇR dJ 8irutA)$8|J 62Tto7SeRQxˁ>w/xt{u5$|upCS35Z˲ug}p3:4JAѰn.!]LOѲp({OA9z4W?Bѽ,̪-9l W•s竟NR,iL XCʗVKQc`pV= rpFY&F:ZKoq%HsQ0w+KldWBDJι/.2"b`B#˷}xXnܿjoPcJ{¸:8GY|~2ՒÉ F0\_S|hʐ9MNcnXo{:Q857@3=*# - ur1$©(Ks HUVEEZX5% |sXǣeU_Cx>r_.GJO1u \RFxυ7zlm&̓?m{2/raܢݧek8^1)1+3 R_J<ʆC;~NwxBڱ.EBR]vҬ.(P֤Р5KH4^A P4'ײ.>@4ى9Ϝ)I1GO <;=1BIBdm]mUx8+#gL 9^921kIR|!%@b 24?S}W/|5^P8xA𿊲?qQ`K925n[ _ǻ7 M}W,M9ZK/X#;fͻ 35e 0BhgH:g2BdǍp{; 2ߋl9}mKfq)tͤ$.w]Uq2=xӬ⧹П4I- pzĩʱ)>#.L~|=iM󫎇>JJ3?ӝp,M͖;ZB;=8AXyuŸ{w=d^痓j^z|(]ӻ4ξIe@n.Gg?gƥ9'd]7D\J_Ug0ycRVE:cT"iKAS8[6å|Jsi1 }%Mkү/:lfwx،X R *n[kÀiM t2?`OC( 9k= x_߄XT4Q]M}3@"2 E"1$YPyc'k,&mM0CJgn:X#Iv!6>P\Tg0- Jp "1\ 
K2R7 ~rg&FU{%cCe P4*9aP7)$1$Z\uyB8ȹȵ;i}| N9ͅ^[n} vg^慼gۃ d]Iڬ-I%]+- Z,oP#={1~sy݋dCkKedsM#וI1q%NF%1dì >Z {Wֱ:S7dpNάۯ_;:7yau']koWF7nկwoOtTWE'~qg-Q[Htn}k'>ls_75~KCMzYF?ē$ˠrfhuF|DIR@'#EUuNp1=.v&oO{?FD1#y]8)1 )M$& Qi鳯3._8|'Z%_naH`Ccl_6yzó Н@4Glfu݊D)N< LaZE-(BjQBcԌHWsd‘bfjOS /6FG͈j,K 7Jd5`Ud'6mnޡ]8 MQ-YbH6}\ U^[8*r]&EQncS诘em@Otk HיHJkRlCr0цۑ[!(H& Pg Wu}-4Q2d,K3Ĝ}aAG 2* c}Ħمӗ|lAd- %P&QƔ(J&z2 )1(4mr>iwvkΞqjBI6׆<эH7#Hvݽ0w[Pn{\\c(ZcwcMlq*'^إc{7rkwC{l@hU[ng{oقϞ<;[L~yxEڛ lfunt <(hxhoG;uy1LcƧӟ <\G x[z)~+>?|#u`Z0 E.(hD2SR{To }Qj1uQ  -D#Pё NBN%fk+=Uq|aڧ }Skʤ|R e+;[svSwD[}FE{6{"yq^?ot{2O)zr*T ,v)xQuRE23yg-]fFV\?쟟j TK(vBhہߜߴҸ'&SLUشwQz(&D] F/@*1y%1LDf2ل7!MHc/g#Նb (=xT0V@HNj5YϮ XXQGR!`1$UT"es*0*ic c"@d> ղ33P֜@qf/ (ÀBt9gϳvg<&<:gZmw/)z}bz& _Ξ-Ln]ԝ1=v,n ԺH<$J@:STl쑀lp h`dD~38UQ&EU1}46jE^je7aiZIrHu$!;hɡv $xK1Ȍ;sk!S)cU_M"yK8⚣mn,*=vr~°z OcgPGD*~zҸP6E (T} "FdS ~I.8i.uˈ%\ڂKRT(2lT[ fItt[r~B/̳!Pao& LFchUNxm:  VXd.٫*i39ЇxqM8_V(lf١,6܃29 KmS0Is+xR9cEe,;c2芬_fx$;Lj CqT^Uz5gO^dW^(䆩M?=Mû]כoLJ쥽٥"xD$+ c2xVPFB~lHC3d!lnr *G .RB|@JRJUcP͞R s5G' Q&%MTMbK*$B̆؈FkA`*SXkfNb20Y1Y@¯*¨7=i.;c0ٸ@1kŮHze QTA qL0 Pg6@hgC F gsgr1q 2Ux|QK&*$*sFނ!h%j@;D X*#_Dz4M+lA_9krD7,:bV méBj+M5=}p=8m-dQS* (M!m:2Dq*4W_#Ł1ll&޹ c*r@tB5 J嬉1xSiY}j|?\pt>{9Kv\,7(o~zgzCR܋Rw4Ju/3"! Y0Ġp@teK m#GE8{fm.~ٽ`P|K8W=;-;nI;'' 0IVwzydIM\!J(b{z9I%*>NqFx~n3=0=9Yߎ4xezoeq7.m\xO6wJE+R3F3َ\+Dh|wwC N2Ej}veДCddBf's|@I*P"Nn7Y2dGeZ9 q%EiMIFža4svجs<  ƷmH`P=kj؁+6ߦEEF] xU)tomN)+$S1#>\e#%i67|BI qqg6x3=ސGaz==kOh#Jqir:絎IBt@QuNyxπ+#7b*TKE@-u,]QEV213xD jtE۽ b fwcF R40r$4/x1KNޜPI[Gs..J$fLE| طÍq됼FB}|çi98~7;c/N*S["]n-eϤAyDKvbv~]5+N,~?snz2>K7}rgQ˻Z<9$'WXkCu h2n5 4J*#ިW9֨G.,%=(ZygtIZur4}j~bPqvŧCn,wwxb ~^ti/7s޽%o{W@m/Jy-C1x[mymm}ے; --ܽ?,U^QtlXVn≜>j(OjZvʧ/>7[ƟVBnE\4'4~tZiWx_n||y6%v[7t6uE\zqb=o_ZrG5v6?iUfJokn߱eM&Oӆ{qV}3bٌl;5}ϭ2g}+j^&MJhlW.]r)BKv1\,?L3yxbJhWe#T-,|=]rl'8ۧ 1x3/v:-r4! 
g?ڽ'a(}HV-;gvgh C.'}vH)zDRQ+i"CjMDfeY6 F%N*c;om?8hy,{Q[zsY4-6Aejr*Ǚkoؿz8kIYkWq8+I⢇'t67X{scu7=zzC_ tUPA[H׭e_]B󲓯V |}LAmjNB]i횾p_@]jnP'{{+@Z:z풻y;XY[3t~yCz!nhܜO0rs~\n [^:M眦vx*h?;jmE@R $ ͠'5/&Fdb/(Q+8(희\[$"Y1#Er"8+Q{V YRBvv.R..9ʊ0H%/#j rLc E'`׬Bx@ԃshWQs 3o BӨnD X˃Z]n]y ڢQT cf6b \C";:*4ƅ5b:u&jF7.>`pKAc%$q}!\c5b(EH>yKn\2a* # WV) 1'l6UXiĮlp`  \DX0+ ꋐ,` K, u6;b$Ac2hWV"v\ 7AwbF`\J`a(4VH(@ Lh @ꌱI$#"(蠱Y5MC0dBKpPRp 3kɎ*Q kq+9nL:T_@kbdmOhD\ ̑+g w"=@W/: ~j"))m;u"R1eWݣ.P˒Fd0ȋ SI 5Ѱ#-d ՘2hV."H7M@VABT+՗B޹a"ٰ Ҽ_v^b_,B1& 9ipL($Rv')! ,}7ffugtz(\ikVP Q'kt6" / 2^,4L۱Ftdnu0@\=kZMU| [X@Z2 ȃ#J$bDNW%Di2aZ m@xDbBEŠ^VH "38pG*hPu4**`JU Uĝ(m dN@d$ƪbvo*yԟŝʑ`K>&+`QED_beޣ.%q v RŨ:"JP  ZE!0 P5j+t ((BkL(`,C)9j6b5 Q'q @AK^ ϠB>D9ki-#XX%:]ᴒj4x Dڤ xd..acm\gTlHMF2@~ЃA2CC8("p%+X(GaGY wEϢU @ur"M`kkNOt`l3,4NT2ZI@YFe6RҶWoY^ P렱 *w (a:HEC}׌ie&0^ \m F[vqX8hvyۜ`tZLW%״v\ה ۶DQ Evӥ&A_1D5f D8 "@N B,zPk&ժ"*(Cղ!dj?JR 7Torqx<[+ee舒.)&6(NhpB']˩J9h#8 %5U.7V,ukUz̓(fXm{TuD$4Jy4$Pm2B7<IwH)JD! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $%8tcjg ~4$V'! I Y" $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! $@H! 
$%`H hC8hH hOU{KH׹*$BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $z$CDA ͕hH hOA+F=@RNBI $BI $BI $BI $BI $BI $BI $BI $BI $BI $BI $B@S!4?O>>\H;朦86n ؏ip^>A;dG)tj\$T8z>w.ğpc.ldyd:LTܮ&W[y Yh:區X~D扻Ī e0.tϞpcyƔcCb2:[9?L/WWH 7HS~O;W\2Hu4g$Zwz0rKnw0[V8L"#u4^Qs޲R[ޯ)OVoΗW(UN " .}3Ɖv*_er?WnYre1*ݻ ٱNə:[фyAZgGA$e`B.OEm _ƩWeх?V8/)<hW1]S,c\>"prbg4d:wحSba6~3w###cW|gqs?]h7a~tn϶%Vk5^/se,s^1g8Zms:4"kR\$ˏ21bi Qv$WOӬD ~x3)E!'!k\ȚC;== ٝ^&0AEQR֍RYh)'=[ڱP9:mP6kB֓cM>YVzu18Ɛ NI7 :~6~Yu'ǿ]ҹ?m׶O4jծ=;ܳdC1JWNUOxg;A: tT2ct;D'ySقIn?5qY}衈oS6 ˠ@˧="q"i0y|idk_G!s~[|zKW;nuP\]h Q;V1 G]QcBrM~+ BW (ϊ60&Zn3]x&vv*!FV-20"Kelh(ӚgϼtEqkX 170Ylig]}{fuC-meV) MuQ9_J_2ґǠS.բ`1h L[ G~ sqdGYACj>ĭ+ b aF RԊ!I'wf+Z:R{뤑sQ+$rTpB9*L0ZPq4qqR_iua23to2oN|4)^NؤLҡ0t-m[uki#s!Y׹T @mc lBtDD)/F;G-7Xi]Ei4W+úeL" E2"8CLqV+Kbz3uبNo"v@5AD4*(dAȬZnrve)iyiM5Ո-vk5qЇ1]#%|i5/il@s29t*CJw*Jw}Wm qK?,?W.p|uLwV K͛+!>4~Hur$i6TKDfDS8Sk>ӾD>l~`oj3%t\I]Erݻƚ Ac^ j x^(+|)Z ]4&gV[je.$!A42~)L| Tgh,ڛ5gL70^~N>,m6e [xH"llRJZ_y>θa&m+D6kqg&!|=fd;U^ɋó׫ϕTzOTs>9bބ58T{J5O]gYcN+o6\s5ؽc P S7-\)C9ieҦQxJ@n#`RX땛T\dK@S1%}'36k㞱ViHߪg mc_hZTmv]qOb`x4Ά??py8x/%9UֵFK5p3&kFunVN@,DEcOm mu;n]2b;j]nKݬ6zUsaXV0 QtBI2gn#h'> >!h:zk֜a/Scf'ףN,ǓlJ'wv_^{=2z[i6FY?GЗs$^Op`yN-nӣ ˶r~ AH)g%hiRq_?v¨wpDuJHjQN> Ϋ#gR?6< w.}t1,׶@'nvOo+=N/WkmY~Q[k-uEJJKYEޔtrAbTFqz(ù1RMid~cg3[B;uG*h%PYd$r" Z:$mַNuZ4 >fv}_@l B0YF t?koP%Kt P!N(+b\E }=xǘ ֣Lڼ<->jA?g6N~6BY%5*EKnIL' y6sROU\!ӷE ͝ۇj,랟޼oiBNW!~8渌 E O?qǕE ܄8µQlk9aSa0aOV l]5x4"~Q[uBTPx;u}mp: GAԽ={}HnsU%f _DSZ,^6V*ԑ e .&FczmyssHEMa5?ӭkicGX1ͅǁXR*DX.G7~h|sHƸ Lo8%ϔ8.׾#fEQO,?"95\]_wHTr=ޱp6{Ojkc O,R0˹aVΰM8Ly:O3АwhAV]m͍^e\x JcRw:NG[AittGv' %9iNpJ(A(ԀTk>'}S®=SwS'tKK#-4? 
[binary data: gzip-compressed contents of var/home/core/zuul-output/logs/kubelet.log.gz — not representable as text]
Fxԟ+Z-f=K̝ b9+)"طi80i'0 fykYsλ4A@*Y H^XFQBsP;R j`\"1xV尘mUuRIwBV,h;sŌ$"ױ2ߋoߝԷE81?xvm ;(XDJqz!hUzw-kZ|Z*{/N V*ڰ!*/yE7d@E~C86^EygawS g#Pɾxh1,{i<ǐ3$8W̧tX)Sc.O?o)ky3ʷ|$"}7+esתg A}g/H{NrN;p NPGXd6(,C #P}3"q dLG9TKB c)(CDQXN}uSBXf2,T ZV^h돥k^VL`?ݯ`O7I剓jl?\6ʿ٢F;Y,:|θa)cٺNk7jdp:x8| !\(0@MmU!~{!7ۥce/>=؁,B$ðpMVmEقa"#g6bTb0yP t΢F;Mڊg%d̵NP|JŬ;o؍iK\>lk3\:dm͡a{(WdtGɨMx?FPt'r:IcZ=5|( !81Vvcǫ6 Nps:iKƹQӑSyI± m2!\x8W6 ٻIbg6%)3O"0L&P</.p*mJR xV!{(xsi3·zomVG0]Oʊ s 1aXgf³Q6tH{n":@k!g}GAy*BQq$ ᷇E,DMZtA2!Ir76K;U[=}fi*0JDP0VXQBT2e^z]e-%2ROX %MZԱ̃᛽@{̖9sJqrOq5Qlu*z/ệAʳGv=t\B=[ z]ݝ ԑ<~nKFE1zFodDՐBYR\bϛĹM䊐͘ (b/Ua|p%h5sW=$H2nNr|.S פrKty`}>y}C-zί<)W Jߛ'LVh+H;V7I{@a$i81?}tKцy?m15slX?"qr^-fW*_K~lq؊nzª<&,(LdN9+& h1"D dvL l(*W/Jn&Z+.ߧ}9fৢwA,ׯgg_s1j c[k ´.ȴY͢EgbDz]?S Gop8@i! ư`eL翼: GpP#CDO"\& 7 Fʗyd bh$?h8Z=53Fy1ؼث{Pa(LMn!ɓ.p+8hK5zb63ћPlR 9ǗomgP .xZS,aKU;h6s].teҸiIžʅPq>-yLRVG?ػvTagУUTn{B`AP87dQ@yVn_nRZT{f;K ň+K=jҊE֒ݜv_l\kz11:G(z}1|Q/_DźOќ nFvOHѾυ?TKŽ|LOzFҽdȚVy}լޯ?WRKwQVe@fPWj5GNNu'X-Ќ'; n&([]i#݌ep ѭX 1߶o;NVrků)~au2c>~{u6YNL(I A+ 7JVXnt &6*(8]QK8Kn0QMn(5ĎM4=o9|_ wčvJi0ƕT #&\!NnwjȻqa$D 1T:'=Aݸ8~\u (+vzՖl+3Qg6f I|g+[׆ERwx`Az;aR`>ac9u.=ZZiMͳJ8w1 V$EFv>UR"MM Ef@7 /^?<⥃!T/iĄ:ry-h0k6A6,f'X@XAMF[ԆZ O+a(šS#me-r +1T7 C6j Ir!1KX* 7];'E@sf7M*khMluM`KO &U\-Ҧcı 4 ϠNKssc,ax]9VAKGUb_>CXi~>zEӄI9 }.j<*L)BS, Vjj}Lcr 354XFQ%41fGL]-[#rzj p=NwL9#Rs=l4IH)DqSO]nRab4p!˱ yLe`ՆoLy A$4W_pc)542O )u2OG_GZzU˴I!&qvy#ITuM MSmk)DՇ4_Ho4ZۈU(dYyQ ?>b :-}*FbDpYA.~{X,wp4]7* N0Fn?[#Op5XfqsEinA"Ŀ@wE|%ԁZ"npVxX} cQz(CFrP4NBS!ib |9 [)r)c)Uޏ'E[y:+MIl * t@P%DLbLI(o!(6r)Paǂ52O+)>PDf5EV`RUeveeYVưpb½sX-_CIpTaYgwqkAnK?% ӗT:+%.(dLjh_ S_MPSyZaJi{!tKzx"1guNu8#%p ЈOUB)\dbKo '$RtMK;F!^'+_.w>R5)aS㬃1J)}[C l2\SGi>V[ZbzwIE@O(-iM"䂄Ha#!Xq$!2 ^sG8\$o!:~3wmpN׼xKk 'g eDz|;RdRqm$X2$O#"6?H_> D D@qnjd`ۧ/gi=_*̽cG4)Nܳ4N|\U% ISwsYTN6ّp$z+^: e==4ڻ@ {{UtΛw Vnrz 'Y ]wߎ{4 Բ#'^w49Ĉ H)e|+ Kw%p|^ W}?!Xη_O3Bdٹ2`7<͊HM O9 ?AV'Ppt׺N!`NAқB@Tfm:,0z+{=RuGT""fygz[ɑ_1dR"; !lv[dH8OQ-wpƶ.X_UaџlRs,Fae6VNS[ Zb'ۭ;mjau7hLl`ak o ЯZu&#{?ݵJ*+ǺN4=T/L d|:Z}.7b7Ip`iUBWiI1ɳRYvGOtt~$OE3*Q!X=&yR} 
9^o.M;X읟l33tlS5 #40Coֽ҅Dҵc G8;IW$],H>:By: ]@3VډJitEu@gsR!-Y30Y(PEm\dK(e,ŐDùo<\se&S]#pԕi73-&ZLM4Hp8LmjѴ*!""ՉL 16L]26zQ-4:OrEF }=F!k즘?^ԙ)4IGaXSͦFG.8]18v@#/o?;TI-ed^yVȼC"ÊJl,oX-2ƼRrGd^.2Ќ-R9'ȼ":+k;4MZ~P^[ĞCd%{VXh}>N\{'N L0-FmuXZᄪ(P.2ȴ*V#L`tK5(4cB*Eh{GuHuJP5~F=3[)z'ԬξHfB`\η˕Fy'c3jK駒݄loiP9+gY!8ؘBZJ` 1gZ!k2h'4c DC8`'Tp.I/yua$}-uzYNlDIn^8C 潡|Sx\<?r-Px?qiZyv霚4j:x"'|'PVq iU7NV N(1T#'o~pNH0B:=7 #'<,ZC= &_'.-;B]Ώz!ҥщ0OIo(C-i"˗-baF^n1)^1 ogϋP!icb:>1<2dvh,! ^ &O4^]Ur/0gB5ɒE-<_:a}bQ~/,Fv/E6>WH}`@J)"1{'Č02^01RHB}#%YP"1ĬRыg~tbF_ f*@ʋWi"1; _Y7I97}0rl(ʌYns)CDg52jmmxI!ut_O'%Dfr+ 4 wH3Vdv5mUBZ&"e bN yx^07B)4,1ڛ,C(MVU'lxhCήiz@R/X3Ggi<$w)ғ526CnYpq=-Rz~FD,$=cXh@9FiD􏤩"Sk5"# ||dsfQC -ˮ'%8:GgD4kH!+a+>u iF9~Z[끙9[DZB@60].] U<ܑ]49AB<04Cml"-YEl uμ4Xx,1`IZ#+XY%PEOVdU7ӏ>#@SO[RcL)gx͍)H̥fP䖅2,Fgڳ/w,aAFDlMO |M=Uɜ:=rr҆D I|'[QL,R<Rư/~ YZ@då>\cZx}CxW- rS%*+4&^؇p'͈Bkga~S.>@pʄh.rR=#7` F').bX F‚4VUibX0>Os:0pfSwci<"v} 3[pE0Ms`5 'XV94BV2v; j:H(Sқ@rCpہvt'KzgQi}Ǧ.5\HYQN\x;i|eB[\"3&~u^˃Yٽ@V勶%`pP\tSͼLr0n㯤wLꟜiuQ%8K[IU,札Ku46Ɠ A9s 4uǍک7zv9jno-Xà(4K2LIž~%c7gq.دr &%,IZcQsq+*\hm5Vp.6cJiV47F9*V!L =++̌4ƛRh:Hi1+&*Aij'<|Z}7VddXPi)bcOw6}zJFR :1A: |fʆI'{e2Hi/f%=}&-e^ݸLFteՎ=ه˲DtR5h?( zޒk6q}Ǖ 'I*L$;*KBkB( (5ǓzjfWrV4DV^ewJY^*Ϟ;'@"I QR L0wf& T,J>còce s_*EӌEHrqB cLri ׽NSMx}Z cG'bj}_/6} eF[Ȍ,̩3ɒT V#cfw*Fxg &e][Nؠ(P"ף -w\3 +yn \*Izgc2䱜KD` ": 58_d ϛ?oo__:iqG Y/Fg!7e31P"JIݢ+{yXGiMFf5C}H8>~Vtԙ"ܒb1=Ḟ^3tu:W惡UTx ZRC f,JwRo`eXT d-j5!_#CoHsQoíQuxU¹%O1&b׈Yu:,8Mv m. spN:ټUKR0BH]ɨa%f 3Xk=z낻p wY c L3IH:Ȫ^} ?Oyli8I_)s. 
E_T9y(EowH.Tn"`]aWwLj$(ng# Ktޢj)qG޻3Z$ax*qgݗ9|kk2n%-R15cFk4ci+ qarRǶ 153B8ۗإ5o>1W[OWI졪mf,3TTlߗΑt&iQju:\w}{bk6eZ^,!YW=)؋5?w~ Wì𣷽hyo&Еp- Qge$[yAp,Kb  4/r4~Hs%c4IG-Ί4t?쨜DVۀAPDs>釋k=wP'SyĐSs3rdn%0JEr7^04!*b0*Jp5QMlo.gM }>tru{;ZZq#JdD}Xy4+xA΃_̴\w\6ΗrDI2:82v}Õc#MDVǵmvdT^iH/=skD4| ٞ7YӯK2I(=Lvqd;[k͒fLُ\4#-FsJDP3o>)o2\pR[ =bZ2v% ;{C&,SMFp0Q8{B'Q9L(_ȸ>I>IЉ$F@8 Sd^l27zRVf_L؛ȬAEΠn07 TN~gvN`oi\^[th䡮FpϬ'C Rv7|Ti亮<ق;v`a-95b52>3y ׾2~,଑N!+m$GEN/&˼4Tmcyhymx/IVKr7(_iYSö̇Fٖ: _#=:^1g5xmV{b9ջ[ ~IER.9E^k,oE8y^^ / l2T+dawgmwfZʃ௣O'B՘1 "[ҕ\6P!JiL1iN8V"FFZYZCMgK3RIUd␇ɡk8Ce <dOՑIGFtUT GOyUF:<惶)E>S,v=*-Ԥ_uF˵x>jhZ(H(MQ}BQ2B|\@y3ƭd~&7* z]Np@Mi7ZUVƶ]mLr60UX!U!"d7NԨEO1u^KE$.Y`kDQm&kQ̐ve:v2WQiȭ7X zmE~gPW1ʺVpjmM)KY#Z6NuOQl58+ x eǝu}VmY ڭjtuY8 Y8/vXj4}ؐ+%YWw50KiKw,Rv)%g~EIW*#Y73#u٧X@HAJKDi q+&{41oŗjsZuf{jx9?dY&r oپefQN`cؒNrJ386zk*VxIxV*T*wʻPӑ*rW\mr6I@B;fNf8jTė/M|Ƕbr[vZֱB8S+N]5bLV ]N\C4ٮ\F(Xffx]F(h4d>YCd06uFm=8/y~8/q85J)28XC!4`t!Nfr<~џ}cIeXDfLOH}iopr|D'if]#0q ׎㌃ࠥJE/1IT9(] YHz2]o?}ǯǣYJv~87~D%@1 o0 Eix:׉S!*rV0 < 8P(cN(.,&E%GItFQp>zMxzKa Li-+Mf<@\`--*rr1_A.lJQ-g_g+A"*wī9BpbnW/tu5鲃J | ʴ<|8A彊ag3Em2~37 #F)]r+AR7 gR v[Jj0[ꞙtM+rj~oLZwhW8eJ,-RCE ZvROd WNRǃmN?\ (`8Dpfd-21F;xq5yc\0u~GSangwǢBsKY`$v)>Q 9u3Z3?ܚ1SE:>x=Uh&%Qm eՇm3¬.-!Ԫ0+xx]s jw62]*RDZk lb} n=M 6:ηpEawI$RQZ\L˯[4]awܓȮ.K^yf3U&Aȃ!#,9B2a6@!aB'B)$U*Sag*L"p4CQ7X]%,}-Z<'H5][QeZM|9M(8e cZ7QFz(?U8k{OOR2ɫ` o9>}9ZkXPhLV%@xnpzN00W\Pte cWBxfd#Ί霏Џ(Jغ|0*r>BG`$*.]U.2 V@Zr.JbCY?[dԝ@C|-n1#fy}dԒCY_xtb2`Mf阭s=7},rJ㋱pY6nzPu~C͠e1QL&\3שbk9\GB1ÜF/ b%oٯNƄP3?=n0O4WNoX2u6Oe N38}ѴepTs_(@;<|bd Ijͭܲ<ɫ CyWA>&@^$+稦k#ģ@/&־1MEK3g͹hs 5 4yuMm|k/ q3@ }s,qluͷ 9$;_oR?^sDĔn>5ѼeRnw 0[CFzWg,%1|d9 OsR%"f/jR!)1 ju5$BѿO?p8_p6q_?>k>4/)Jk| ch)Pj.JJ̗┘&T8kwr*iOM\g[#}F^I-0JQ,:1-( L LI4Qި&B*֞wo<`kuPI kaB dƍɬäJG=%CZ\ODL 7lJI]~ _]VPH_5`t!Nfr"XkqEˀ@3;.ud֧e̢T_5hg,wTȀ3bb3*{,ㆂusQX-nux("H%,1Hh[\5Åx--5}Q(!:S+)&RU30R xsz*uIqF(ep-i/}Y,EF?L ݳY8nKYCKsi8tSU^mL&7hK!!fRϘj Ѣ^70fר싅ʼnloRtQk2cs+clժcG8*QCwF047Cڗdr`\ k!} $q ,e3_,er,:(3{lKtg325v,[" lATL9]uRlI^|}bkX“Z 1@WZã,;Ŗ3@B{1Ĩ)Q$HWh4wmdXTm&=ƪ=-ᙛ^/O@&3@UxvlٴML `@XDMF 
,hm#- W-Ց"DmGP H_B5TcuhhX]( {ڧ*~!#xu!GL)3(~{j2J74cL] K*G ¨jT%X?ҞN2{fB_@p;*0j!B0ԆEWrF&gO6g`<'@y0[<>iҿgtީ7/hļAbASQVo Lj=IRƫc t*[[~$Ǚ_=deYav"o6BO ZR;ACR/ WNQx},̠9TKj̩Ikm}6L`זH]wĤ$a^#G9 {CMlUՏ)`SaGXx̓}.=$^( ZuqlA`4njNa7m 9ͱ%6]-`b6*`Gy6؈&VՔ]]L54~U BA ݞxP#PlOj>Y\'ɳ)Z}= &1wOoG<}.׋ヿoћ7Z0͛>?f4rvQ5/7үjcxn6([MKw {`-fS]{ ׷ǻ?v\^h`Fһ80W_r?c\#Ƴ-\=}sڝ| h:-V@ܠ5bbhWb6BOb].`;/q3%d"QD6.QB:>BcͲ)LJ) ˲Bvִ"N%)7msd+բ\i-YR"ȥZ}Q ^Fq9{9sBFQpV% r<4&+6IPIƄE[0F2!RzC c1 5pJ7N| &[ɕjmcR9ĦܺW|0i3$vfd=f}W&p*a黢MpkR_ѥ5 ,{(5.|6"|== A>y83-ѓpe΅nlLI/t-s.ʐZW!$[l [/+@C&bVR5qZ,OA4"Aq`(2 X7{nBR*(J>Y61Q$iܐ^C"j)UgTZ4BhhoU):W124'{ga++p_ Ɋ0#Rjvuv;$[4У4+8( ;AϹHm [sCOt"e&)UA ԜƲkVlw?K Fz l5 V|NJ d?:ʥc`}P 8Zc l. e=j1T8Il=Tk4װP !:1s5n'z\u:7(L עX4BOlmYnD S&e0ާyY{c ƿ_Nz3}=3Є5Ꭳ`lMMP+W:J^2R9"L8PdY"0F2&~eo?]zƘG[Yf$pcv2SX-k1솥lZ Y$XDw^nrt E;%f[І[ \.L`/ڗo-lYԵ^~8?;C/b:[ 3P')w 8t{|uz1h)%6uV}BM:!Fi좚?͏˂#G7oʲ/=Jُ+|W0k]~ZX~沸C=]3Oֶ!cVS@2YKzw^;D ^:pcgFLكڍ *b#hs\.)}cpɜ$$)HQkݤA^IA>0Y%jD %DZ%)a6a9x͘gW= D&-jP%n)P`fՁ|{c*,FU]EA(AP:Ye]U6SQD~Z**WIHPY U;\+z-]nՅi_=/7@P|*⢁ 91wy\ jfzZ/{4Ӫ|2؜|6Pd,BWm6?Ŏ5~j7So){2Otp?825SA#KY"9vbPJrE]~(#e/ё(Qo-ݼ2dݘ+}CYozeG ljl'dk{ٞ`-3T[({ːnJL66T 1^N\w:A{|e&V(Ľ&Xo,є'. 
?5SLکa8VNԖD`ߋmvjS;MieKzw_Vφ I:H8ݤ>|=#E[ߪ /9+1}=d{<z2kh ~Y/䅔|m?@ĝO1=*<˯G}[zt }vv3(9x76s423)Dya8XxR^YUXH/g$&:{ח-k^Oz&̳=ݛIyMЇB %⡶NZGvg܌aOB3]K'b5$kj>rkFԪlmQ.GLXXc[QR)|b^윏;B*V}Rj;Br8*hr'cMv Џh-/k@R#ZMU_B1Ok˃܋'s1#Z"b/Jn(91xXcJ-x1vw2@MHc "vy/W` t %攨 tL#GuM*>b;98Qg/[.A۪ @MfQ/$!Y3lf̀[쓋[eۓyPRvDQf;Gs!}H>D;S9S0M|{E_dGos~[ZLLK_iJr i9& {i1d8]Uנ蔁H߇Ы-,';z>=5tEBhѧF!bs`S/0xqXg &Q0ٌfL6xi%4v$H1P"j5-ʶRqQŨ0!6xF{1`$REz# VԜT uil U>-,:b"<|F+Ɯ;:~[tAll$DTYFҺ*6Oޡ)#9*DlIĬz-˥wI b W1z+ߣmrRUhB^lGZUӉhٞ,гb2*BV6[HY0YP^D[ACd%tȺW]_=t(G#a~4@VVs%ځ9DP2M9px%w )~>:ɼlƠ8DDaYoEû:ˊpy?uJmy38WXh`21x卹P'_>ZU8~>jS|?/_?9y9^/t||=XV..OPjWKHzk"W֕oq?WPYw.Ըy|?-8 'Np3ƪ߿Z5c5 RWY^$Uh'2Ԙ٤P;7\!)RjעbȕKyX9.Q-|'FK1.s7NbrrɭȜ F&;H&!EZVcom} F6>*6T2 Z]S>59 Oh+/LJw4*d[=nfv z st[B8^fZXIfcNq YR쪯PjCm{&˖JtO1uKL]^e)[Wxv2^S7%ߢ q=LvE {'c8G+.2G JN,UF\Cݬyt5`$A&]8)9JF<V+GH-G?QZ~l7FO#l7 l<䜱LP#`[Im`)d f PB9JK+#t ά{/le)4jِ IWj|y4D$5=VmVT:[ZCjï* D~}]X&h>?\f^pp= NvT}88XSrbN5BKGH۸+/|~N{BH!iD2NMdgvm͙([I.&]9@vwTaD),;vTQ|6-/w0kJ EkBL# EI>LTajH]`G( A #EZtJ6)E+76*,}UV>q;n{#D9^ oS9(",Cׄ3dDCD2^v`g8鱘s̗۸` LE> L)k*0PLW=H}97'Rq))^](^΂c1_7!5:~팗RGt0NN;$]aʢ'l_j*B=fN<2HPKh}+5EZI%x/o.Wv  _[ōKd˭+2tw ^JcdkAy{#NbLFK4f'A3ԗ#LC\P{QH7bN\;Lj +;qo]NtOjxPOnZ^hQwT"$ѣxʣhiM,`{ RgM+?<3j% Zg4TgDy-<\ԬI(upz?ݭ^l#W,7ZJ@=;ڎn2%iĴJ*zM*aQ0ǩ#ے\p0_[Z(PRT$VJ;F!U):;F n^}sÑ@ Ql&zɱUr-io2E{ETdOr.¢ \15ǂGm¸zqNVlpSM}( JNO.Hݙޑ⻹KW އu5pZ=NeWU=)9]C\]D㊭;ntq^uc16Tzw0׭)91HDIAϰt*u=0ky*x:քSTNڰBTw PU=&L8NI'/.)UNQȝH9'[~pUg4k `XF7C2}1!&?YI=OF(! 
]SȽ{k(e0Hoiލ{z̓Le"b\`6(ywΨʒ& ƇH%Dqc,ݜ_2Np0e֑FȄ&FFU٭ CfCk\D #Tj`P˪ƒ͞}p(߶ZY8UfVhN [^(bYU^lRlՈc1^)fxuQm [DI Ua[,05vn9ULhqalQ_?Uxf&+IǪVl5o5lgZ 9#Nq5L]FQV{Vt5r>dM#x\J/AlG65.$ >R@qQm Qɢ \*{^qʻ!*5 3b *zgԍ&B]ɊU{w1O`8SNFAt7;s} ǸJN1# ȱJ1S+hQA2<+1Qy<@qVF:[hz^y2e8>Dbp5_SrbNȣ%\&)4; T\Sh*Tw4]RA1bO;O;:{ ļx??0x@P@e/ u8֮j_.$KV( '<^#3uCɻsX=mX=hU탬V }t=鰓D'0hܴ&9R&jS _<btݨBD+"1ŚjNJS )b8$#ĤVőWN(G<)@܈CG>TP*^5zQំ%] TCLΓfRCSao.= !ߧ'c$Kބ3#XQx憒w),Г d~O1?ǣ9;誇,'d~},qad.:V_<P-Rĺwuq:}QI1<v y,xb'$_Tvee'#\:%= Wzɏ  hq`Co\}_ tVQyϺܨM<ޓ`&7x%8_' #6}iASlzO diy6Έ`G!҇9pJϬ HQ/7B:a^h kyh}ht'K/n4tZЉB U|S?(b#ۭVh ^/}Lކ^OrS.ZN}֊#~ñ4 *?uy ǁNi|S':{Nq3vy@IzH!>c{ N1 iwXQv0;C=7iΑjR=`]$ J9(,(7hT%4&Ԣ+ xa+W\ H1*-U;LB4vZA0>SD`Թf lѝ7&LQ-?nO(eԩ!dbӢz•Xl-e,V.CUwP \V-U{#t6nQJL)bj*4=g L4`%tFA͍b))ZrՑU? Sr,L *RzŮIN\4fĒ4b]Mߴ'[HTsجćދ-ƖkPe-zӋshӳӏM,*BtkL!JbtH^/yVUoʕVC%N۲O]lQxdukP Awԯy L>nbM(U{@JS*LUp0/,QFUS*LNɆrm:1Fks nB܂_lWQYHb'u:F|^ /vT/ 色$)@.b2#QŇ*Gͯ/ 6 o:*Bժ&"Eb2E] 76CVs>ZoIK_(-R}!߱:}5Θqr ے:kҐsN sKJ|6JGXuf!Up_J:ob;Zx~ wT4Ӑ|y`iO_R1:)$q%rUrYq5Hmh5*3~X̪{*Cu o?] 6]p.E0|ǿ\-bwh>vss08+FZ:ۦN|yfz]gt0\7@* )-Z(]~G4ȿ`-xCp s5@M Xxl=8n˹*U@)8ݻϻ+ygzLڜFԒkiা֘h{:l!e:@P!\i\^w0DҨ(c2ز bmHZ(``MK^{:)ͬ)+_Nzxo$3Y~(Ӝ~MJW߰Ƙ-:f%?D )WW@VXcfkgz?ToAw/"v^V=Ro>xi&/L|dCng ".ea7cǨ?sabԳzI/[#.YYʻ!ιR-{p%R3mAK' "IBvͦ}Nb *zzY`0N?UkȎSR[ *UWJ\tĀ@ORz[CnA<ݰzy͓YCu@Y30 yuxW$)^Tr P:^[^H2g8y@7o6[,ö}[fJ*h|  gl&sxL\%j~8J ={P#Mٺuy(C ,PZJ4\ au&N32Jdb>Os{*%NCO)%(]ܑA=+R:GvOR]Ӣz_*RZr^\uG}&f|wr;;o'2%[[c MOԗvCZ."Jl8?m|3={JiVj\+#]E(l+i'3'޷4+xb1mkRtqc{}{:؟-wdHFo:u1͸"ľ|=)B]ce7VTO|B75O' adF/s55 36O_ ;J+=Nd;˱ {q~7Hqs~EEP%5.4|l!~ k'ȟ3*Doq? b]N/.y:Ũ{233<zKoI?`oL9:w4~9o-快:Q2ӐO]gGͱ7o=y@̦yNƸh=y^vۋUpa S{93ݏv:Ch0`n(Yz VPv\0 Iف(<1pPvC]O}KCQvA9{;4>͑r3ݐvs*Ǣcv:޹$gs0ljh|/[N}1߽|)̢w(#Jsԫ  DQBK45Sȷ@VadVvkzsl솲8t?*mZoʇWu=TgGVR8@־@Amjtyy'$w,'E'Ӻ1.72ѣC^%]}r^t!HgH/yԩ*s8o*dqy0F !HoaO}vO8DHt*;CH0`u#G$ x9z'6;K(n{ ޘS$YRX#_&"ګ`ɛL;rEj̉9Zd(LZv<,:ɧޡb mcAjklFԻmgǰga! 
ci4φ]4vg* ܧr$?}~o moG\lI"6TບDu)J4vfȠF:*o!9sbƚԷku=?㗿/:>볪Ulz2v1͏On5#^~b` TH Ep&8=  P!<|_"9sϠVr0W#@KT4478 !\6w^^f\~:co ^>˯>"ftz(:zI0:*i(fPhҜwV 80tЩ3DnHG.t sv@f3vC@}H|P] vfݡ9WL)un4 C=ri'$}I;ongr&$"nSdUEE %\u)kO~2-)`mެx-cA~Pl`Q@ع%AǂseюAF}yQQSGL'#{QrꃾT:ʨ=icE+|w2ݐv3K;g&킾gCX e7 ̧xw˓7ӻ)gtP}NݎS@H0{JaIjLV Bl|(D5MB5:*&f51 -"? !ĥ5XW,^bJl&WV@$RP1wx pYSe;v~%KGǾ7P؋o:ujsRhރ/[Lj+$_SVPEؖj[J=)( 0=x+ͬ;|H0]mx}TN_s2S ٻ6$W~Ŕ!h-`zF` "٦D5)od)/uE2EFFDfhL^`4(jlb8 y n2LgJ-(JQOV4;4WT!' |ʜ\}&78}en!Haa&$1œkA}ㅹ_JW8k> ><tL.,::UOP!~\8&7[m`\!p=?v[H 5*a-&ӛ?V.$WԚ7]&d #&w`28|Ss79s݋]lٕa$=4euq煈+D0{=}`v c,1ԼW"Dq5R8Z]duɇcY~Ǒʥ Le ӥ!>.??ߣS po ]Yf1BND꘾x݂nُ=ݍ7A9EV:è_ -ޛhhOgr97<)t1N؍!\]$dѧy|?YB&0f?dy݅xqB;.hx(b(W<8Xgw_jB^h;|4yYX&_2hzJDn|ufZ X7oh6v!Ȥ׋(.g=|NP-)j-IӵR+M6J6;|*J鑀moe8bZ2df3[Oxqj׻xP=ȇfN'\_/wBNzmE>?<;b5#ow/?oGHD>vmc`H/m\G_]_ֈe-m1a6#b 0_}Q{Ǵ)dc"!pb\vNcV*C %XI0hy y$Xg 1+%޳sSkR5睎[vPN Dqôap X4/1 |E/A>ӻ~8F7F}ɲnSX{> onb8]IAh-q,MصQ+\r}.,blV)N¤rEeN3D(e\x̰ sAAJahl(/9`̈́fTNd En9*۾;m=譡Ҟ=-XxKTp(6kaYa)vXODX~tp6"VִjQS7WI/c*i; v#6WqF71.%{MA-l,FSC`]?FkUㅲ́{Uon06wN\ }8cpy]^(f{Z?X[ob:CUl8R<^!4N!j<+3e\L; u͔F1&l򑕟E $KU)|FihNĻľS&d^M%eO"K.%tVɁ*z@E!>";ea;c`_2z>*+)}~v/ ݇jJ߲O'Rd;L9Uk{UDN֭F' &ɤjP7RF6tI+rRn0KXgs߇}y6<Є!!j%B xH6]L{vڕ^JqǭKAT3E! A m3GJI3㰟MuF>q𢜖}N@K0Fv_B z2Ul,!ahmFɡmV^ x8imc{Ā"e35)>ūnOSJh;u\*aJ<$\Q[rb)<#Yn)cю)Tiq*2W$"GJNk@"M>? _0* p27/Z=:e+nįԯn|%p-_%0Vг48fv/c.F٘6!Ϟ pf;ݻyCo0zT@?pA.OLF)wWjB%>(dz;F>'ٟBgٟwFfzx&k#+V*8f~/+B39kVj*hwK}a(eV=a m|y.bX8%4ZZ'02!,xGBF&$LTᔆYÁoϘ4Z5ń.$Q af kuv.XyFU(:CT8??~zx0^C>΢YQ@pkw`|3%Pb, 3c.mʜy! y*NIDp=ۥ|]h_IZc0J ta`0R1Rп0fyysE'N3zbtM@U(lE?bA,892J*-( Kr5Nn$4hqF,<&h>dٝżKe1V2@ef/n?9>_c D@{!_TSc՞@GtJv~=Ykvhl0>{0eMSf!D5M!}\Jb"x'|{* UB\?*7|sS:Y>>k?h>Y嵱&'XPϘbǽͺ"7QLxdZ#Tv \>/RJI p "  "РTLS=JNyS9u\y(P%,I9oXN%eFr,MTu_T+MU廫wä5+ B0Q>y EdGQ,ء|'akQ0O̧0iB4v-'BW%,*i:zcUSoRH<͙lx.F- 55 *Z.ԑ?KzjlG KxR,_wrn?5cb-kf!2*q)>?[|s_n%;Wӝø WCvTvIg1LJ|չ`8R2w6|D{LR]:H\)oxA\D%q&`Gi*jj,h! 
,g3V)hvFYQ8)ю3 CHJ4'W$%: n nŽBm*4\7!$VJ]RiPb1OWzj}yyI{KoW8fz7+=>WR j!vZ:4BFk!V).{Y:?9}*pUW**UrR\պP1Z:&Z!eզH$(NS=Ԫ0G6r\,֕:2D@JD5A\#&GEN[+qjT6 qlP@"3p֔)ߕ*,t$+jt*C %XI0hyj \`m]B48o#L&wip BJBO)2\>ESj1B1!ɱlpc[֥dmZזZ2Ypl `#$ 2cg<#AMk1 i6BK5ptl`[q *M8r)D\!K, QTkE)jB2@:vZB\r ͧ٨4i#В&XWuc! Wf$Pda$8X)f0eTgi2’iB A#P|QsR$%-hI<4FQ+#3)KLA QFp39MSܬg>Nɓ&Ϻl5I/yfw{v]M?:0?ٯ+ݝG! tt52V%/jJʅ27C-* 3Sg/S׆JWPSp$B Д(c^(%c"7d`.TڽJw1Ju>ikʠowT[&b*ywԵseiАWT7Q#1&0IdI"#LRdYi9B``b3+m4Iw!9ƤJJNQcQgor~Eܵ 5R+^T]ZbYנkv,ʶm7,`$ AX {,*@%y"@Jj VKBEK>گ-=:+F/F`)ld*S9} V Xfi)%L*$0A"!(=xCKԄuٽM@)m3 92\YPj%XS⬑2hw &,Zx pr禍:b$7Yd_k".+]y6*_X^[[ҬSW&WL:íCPDczT{y.5mJB5WbmjT uZF Aؠc/Na+DZaFe(QYĸ&SH-wW0IݴML*Ĥ??U1+}ZnP=0zN_!Rq' WK'mnX!y/!>XMym'z~lj+Np $ȼLP< ( JD࠯^+!tMce.?9쇑klf°,IG`[HLw%,ǖcB増d%Xf͚c+*r2:G·dn[F7z+EٙIW;͑P gd:~$w1,5&npL :EW40`Jsױn+2k2>eGS`0$ЀiϰŞ D0\ް Ӌ˿iayJo&WUжU\!**U\I9m&La  %4<)l]P%懰@ #S#jnuFrѐ%&!WC&qI BwHj6<aL৺Mth,/-)[f\XKf]ӊam ('T˞/i lNo7B%81S<6 QuUK ZѺHj =Js-7#/{b7RKNdi[3 fR(ăJX)M݊تm5<5+g=Ux9ԗ 4iotdB4!ާÉ MFG*/_zX'k_yXBa(C7ST`K+ruWl5"!#ط{ʵ?~qǿ=ZbfG4uyo|J8! 1p;':R4^>r‚6Llc="T(1Uu7V={CiSIE c!ƴۭE17u[SwJF;}fh5ʹAu(Iu_J^U*J6obP Q` :0rfpk>V`7kcFshjԶ\cE P(<.BYRt2/mywğA52~h=#ta ;`6m:]}4EݬX\o\G;(.(!>$J{3!MC)np[U$rrvY˯ Q6m+eiɹTUI_Pq-i Ν_,vbUqrHYjLX' ngg!= +>!%#|tN  .V/,WvfOv |ߗB*DO,DzpDU;Jx%K@EyeooSCHn\2Ǖ{T fCJFkZ3` wN{q8'ZD:?eV-h~:ҎW3Zi.{0m3@&/\2L,`C!>xֈ6j0Y;J(sLl2x>}IN9xyz B^;abϿX\.>}}ʬ`w*np͵~橐/Gȉ^~&sR.6 h[kLVP,] P,U xf˺"rI>)\-ֻheݿ+.K$!zXߖ@BTEi~(Ny_p~_^(r5Ը2 qr=2UHo o||V$oHʚ ^tsViin6Kr&W.Sm ?p·lijW_!z TH]h!#Q}2ՐmCMcW&)s:؁[=5 PzEq|%U4 !FloyOA7v0ydDdlq$MCI)ON) #C1%8ӔHˍpfxR)3 #s8҈m ?g7VN5[x'g tKle2ث,>DI}:d_ao%_ydeRo:% FfcDKZ@#7WS30d 'fٖl w=G+E{{CU,27$wN"GnX>d(ur n*L z+D! 
ElAlP~v<*_ V[zb#6PkÀ;,:(*M)X]`="VqO}Fx1_'qh bxexCL!@xB!Lsrc-G-|Ά˻.~=M_w&&:s35xoAQ8nx6$Q(w7_xS_0ta_XŗOmu8ߓd>!ƒBd:p8zի$Ȥ+g02@ eMz($ ˅fxAh2`y&'ɯTMwDBOք`D8!b21 `1ITjZa3-%*qG;gu("/y{KyS=|TȰJ:.^h'lח ܴv}k׷v>vSZ[]v|;zmk#i#$pE(,yv}E*%[- Opyٟ7Na= [2Kl^[HE9;*[NL[[%Ȗ>dkKn[BO8~eKz#  2" B%f֖fK-Y@^J \gU8ҬjqԐs "ٚH6Z3&sk2&s1YVњ̭\dA ;Lf_DF cb`p ҡW̄gxL@[n9\5RdqUSFF G犫ZeL2;L/&'[;\ZܺpQr᫰e`K3o~gaג$VG ǹd%'\oӚM)Lj%4Sٵ? +tXM6&H'0[kG# Zs񴎬a +Hg\dއ`H`k׼xSMbFV5*hچ,+3ʈ̈/0^PO(oF| ;~[ht}Rsު+)rҼg's~OpK{ ܪͪԌ;KEKԘ CG2ԙFלҟe^-bM}'?=:O!vQWjĹzĩ#ou~ X_6\_-)w<;xy)b bJIk+1ƭ}v2)OxOЙA矊+Mf>29lr,a>r'=8N#29?FHH:ZNJ?y IًS< 8a/pR񫍎_tjF Dfԋ."L)b=ŧWշ{qc 3LC@vttzžH-_k T}0ÕFꠧk2eЩOi4p#> ä*OIh!0CeX#i>Ze/}x7јCe12v,f컷_a/ 6S]}XBdS vfS2CT f9 ޶2AK qY,WHK!<}NAB>k4b1ʔTdeJ_:(DtX%O}~"5SmRQ!T[:S11 4 d8˩ϙE4cԂD-IG[:*=h/Kkrڿs039tνE&A?X&´:sO1a *4̈́Ќ{n_.j,PJ;k1RIy0܁vqoPEcim&J*2/Ot0( ٔ꧊QyŹR^D$ʼ ~#E 6`Ihy͕T5dAwXL:a->ڶn # Y/]qoիJU%zUmZ+pF>p"`!C0q;J 6=a;h4V $ ֻogl%Y{Ľ)! ߟ7`*g~%ztU!H@3L"QxS8I x"8 ɐ\svbE!@j?vXL)Dn]*UrJWmWzM)!:QOK P8k(LD"q,hP]g%')M#&[,=&CŤ,topqaSDC둆'D=0ҁY̪t`Vr%vcAr DQ, ֫ ,%[cGWHDKJH)65|;,G0{^"kMzҁqc> H]Z@c!`XlA[}F{1DgprlTSaĄ2kݢrFVh|@\8OxKߣf~ztP Cҟ왮*1wUҝ8d8D!MXTKFQ[)B9j-cEڧV]KJ[={Y߰vh&fN`i}ҐCu-X񍛝UWNNYO)*a#t('_`;CPWp%'d*~^eg R8l0XS"J ^DNRe&'`FUpfRG'UR0Rl`3J].,KHkĖ'H=$F&wx<` l7ӣc1:b|Gj8]O"@XU[Zp}q\J]_a$QR:OЪ'}\cߵ[?MtzݽnYg]CD׎mI"JnT$*%1U$M{ x,  ls&s|_ަ`,Ɩ~̀lsZ,=vlVL]fpc*M:+L>׶bWn^2yA@rOH`4U %0Xa+.Ֆlg0'~EtRR ݼi2=7p>Nݯqol}O뢇 rDUvS]]"o7hװ-~ }q}zDI5#Д|Totӣ? s3_}a߹ZRKx>MpjWmymQI? 
ֈץ]`bp#hcIB"Z S\qrp(] w@d$lQ\!,(B8 WE#Nr7ߦ|rfI'j?&D/BoY^A-?~v.=֝qىg^.F0;B`ٺ>Ͻ!nO|Q.cop 4J](+Wo袓RhY"~kv}<"w}3m?3 ?O,B*߬gbtlJ ]0EM'{|~"\.ƞ^v@3.d!`>P|ܳ^k((^!pEI4pO="D45eN;y}/|ʦ-zpG'~\M}懲o̕bY:ksia?Jy(n(9Sdʔ$p̗!6|?^^!$v%{)S$0ܾ{Qjޝ_Ys^B`Kkd|Oav?W&e\.."9k_Qx9߭!a{G^Gۻȫ=rvص35835^v8b1?'y qpN*ϘV2uk6뼟)ǤT?qXm}x"+a͘')1^|Ʊm/EbmO/C1{X;q/l;_述p:-/\C$z"#C^häOH;g^<ؕAh5>\xJ|WP5ynyP!!(wiW0Z9#'Ly.$bB7rxB횶2e3fk\ū[[\:Fs8T\Sk^ ux9eػ+r߬Bf:*loT "y㕯t%]<ectC>M>1ۿ !j|eڵ|e\!|%4O<]ʾ[`ǭ~8By.4kuzN.bZPh)Z9CwūqJ#eNp3^`~  :>I)@BU:WEbVm<ɖs J'ɨ#A?#ЗYF%Bllp0y 209>RJFX?(*WoddBn$ I=^DZxPDLJ*] .`.dQl/096[€.@v1GBL_cł>[𺻏iy{BKzlbs}03v#ac", IFF,6Iƕt]Ő - BIS)cBc#6wZUIZZ'E#yM?_GB0BL? LIU~Y*_¬J , ވ.p ^J&Y۝%2(t{Ccڸ=.Ib)fHƫV%%2 qʺlQ^e .,E]Erh-Zdl у1$ ERh |ZҘ^3ZÜmh(Zucd#U5X%+gɭa=Rs7qz5(?kWAϫ -4=j3j)U*st///.\w)8>=,9Q|Z/Y3WDL9o4)F $yZ;d2ɑ)6^ocsi}ٺ4-%#"9jQl2B9^dyN=<4[)®'_ncy8Vۋ>e};R;އZN8g!-6#8=lчoSKQpDn̈Q1-U19 f PvUmlcl^(v14SJH!H!Cr`0 Zyh\Q=WFńiWO9ASc,j[IeLFb|&_ Ek.Ia$ 2nG6|ZGkpn UEC9F? zmf`_w)x%pq5n]^LE G:%LUQX7 pF)a`q_bz/@l42* V>!Q__ :jwfU:52#E[A W3;sן c[kayH4҃oJqF%VCd+wm5iz .6YԳ0Z/&( 0φw|ScB(Qe"uuP}m]4zTيVlRG&Ky]gng-B"52'SlX2:L]1n"܉z'1 38_rpIc#uȤP1=cwE+6B05Owӥ.;ĎWI;z&،I{#U;|{GSMttn+Fh}_~Jt%Y~nO!C?9smOΌ|V5 Fz)1fq >I+7_o$ HBo²JؑxiĢQ-7cn@M,$.f!(=|Eӈ:f!~+8ݷŪNHx=twP,^>V;]\_e*t~)zJ4-e߷ݦڙWN+],]o{j6^;> x[ }Mg?` )RFg]k`.JvzY}vVr_GtwƛOH(E.kRVL+J"ƻjv?kZ,m!ʖ4t@d\ c7D0i NKÖUΖV(3nlTFq6mA/@r $H ]KðVLLSww=rAކd@*?=7췩"@I-uSD [p+$JBmV@ZJ0{u Z#M;#Yh2%d\im8ˍ!_Dj%zWQpRZ"4N?hc} }/%~3AԈ(]= ˇ/Hlyv]?~!L񍒧Ӂ{,{_ʀ/eoDM4ѻ#-oEK똴1#2 \5R>hR>x"*zX][5,GWu^ME}2=xLEOO_wNBVgG}NxIXl53Y^s ]>3zhu.vF/!]ĄԿ͌M/iO^!B٥=*(N?Xi%z-ѺFkV]\+0^BVʮ{]o:^vnQ>M#Xf>d|L'LDxlXS2Ai[xSOY%l݉V@LuRջuy&[͟Y;wFpcn^iwopEpM >|פy<ϙĦlsni`B.p.`Cp 'ԋ!77ObcUmdmU1-!7xܦ69TFAѼl 'cF)N'3H#IˇGF'3,>)NEJ${vb!$ZVC'ۮ{;0<~3`ҎXobtDZK(1BpcƱ7׻Q3[4o͹ ^ y+P}4vXV^[JU%'@'ZAPI`7syʚ'x#dWafDqB'i,Q$7 \e$Y|N ўmڇJƯ-񮇛B_~w)dI4dޓէbTF%גmd;Q$D 1Rn,Y f0%a|aip$zJ,;p=k ڥ֦(PCTL⫡y1zv/j,n=xNز{G\㕻>[۳݋.^nlKᾬ_;܏j|!8!$tH,i;en `;:m鎀<}^(=UЧi\6x;"Z t a d'vR2c`ÇJ ^,0!RSڛ/ Ͱ{Jl\cN-/0@eZҪN'6vܓK%RC@̘_1;ˤ QPucH4N}`eQV7)<#Y)RWflQ" 
`R>+!_wDKeykv!OJ+'6#W K5Z;@ԇxZxj?g/$@T ʙm]Nyڊ^hi?yQti!#&S"A<11Fk&ur%>c$SBH_-;|Q=xlZ8i0*<n2VM)mLTZؘQyMᢍ:yL"gNҷ-K4:24thḆlg,ClXŊ8Q8 Ǣ hZIX$f:bօ]19zOfyGsd0ӊI\$,¢L*y= o&|syW4fcC`T+ \-nlhݜ>,$_RzhW3ܫ7W˻?m[=cBkK:yĺn}Ѳbq7IBL{Jk[Ekc_=d$V䢽AYrz}<4#Z))eI$K)0Z`p=ۇ~A4e:Lrs)AN4(< 6Ҳ+Igh 2Qgb?NOŒtFZKa11=JDN ><ͻq%Lz ZԬG, F8ZʎP1"b}(=(5 {`\VPքt`Efh* c;oI4jogM^`ba ]}5ǽO]0>=PJ0ѵjxe鯣Ԩ1[P}ȍA7yuOh @f0mjArLvY::9IpÌĮ0FK#/,P0Hgn& Ɋm[^INwz>$[%d";vCH`"_@Z0$D[l;P $n"aAܫ?_^ !5_n+'!}p$dοX7anV7dpêm2PL64e ^M^;nrօ%Lsf> uOe(P@pi*iTdx-|'LIvjZs?IZlsq*/]/#. l=A ZUY:`_5Eߺ#T7Yg7G͐M}9'7t&WGER!aAh5x#n0o83Lho]̚?66>d_pMn&)P*!NVpԗ$8h:ۃtc^Ou6w>(5gƣhOf3ӮmJYS$'ꍲLk|>TgPq[жtѮ~}\5vgry3Ǫ~X87_TG~]wW޵_]z1WyH-5oZ5Ά%cV#> TB4ZmH⟞k]|X,k |:л ÖhZyA}$}(x׎mTj6yB,T'R.{l/_؃"J .;vvd(!Vhr(ҲN=#VT[EE]/k=}]p`U *Mrn a0uf낻C.>/1nټgË'q@ J"+T0Ce{!sܙ R _!m0f\Kfnprp3^VFh,ЁY%DK8\;B\G{#l L!i  8(M@ (H8JƄ ԥ=Aṽ{vǵ<7TD;<o>C}c9|>o޾QW@.|I K.2#fӑ?fs@ĢdVTTJdDUjhTwZx!<5O&8-+5X ߗF8*E-7~:X3] SY[[ܭw,v_"3!NqSmʏ.d"}z@F%e*r^ٻF~;!AۭqirWon}wʢ.~f? 1c"ܵ}mdm}r%ض>9SɻL~829F[ !^ɇ֗o.n25Ш Y;XOZ<$p2L'{|Jɦ?Q"$S{WA;,o)"" U4 5@rƍF*h)&Z7Lp I!fyڑOwp:$ `[ }#tX"8!H0tXNn!X2 zCE!S@aOL.4A !I j~+l$WQM2ZB5o#f1D_W #JC'BnwmV.xNgo36 C>#p.tbB`…M&0c&~CCSq!ZZ3PZKOTY)hFš%G\lcN5zpH5!b,¹ңvයhm-xX;b Mّ4vzpc6E$iwH2Ɨ„/RؕC7s\6[&#q:D/\o7=(vc?n)O tRLjnќNi+M!wKf4Ի a!/D{#4Z qB@['CܿsK|n-W6bV4;}-TN`]-ۍ#a! FI( $uՋ@ q6Cݿ3)&1BR*T|Rnv"!v2k-rRڽ%pV8J~\~֗ ?!G79W;1/7Vp1__ ^[V޼oofo zMo%>X۫%^ktSw_x"WZD/e  J[ ) &:;E%Mh~{p7[}mp%@,bBd 3 pd8ENp}%z FNF֩Q,婄.Lcb .deKĈ"o;nDHD֏ 怗`% kv11M CT(d("x(b4XQD"ШQDA VѮI&`kYP ".Em2ώ_9$/p04Ih4?Vh4O7h`*L-@RFL0c409A1A[ RԠ1Vkk!*kA G:AJN:O JppDJNd%5m$V7uE0V54JCV"V5LQP7ZaEHBoZrv$P J(F<֟oMWH' e[Y,߅%^ #`8sĒۆW2y뾤ؼ kDq"J'lNBL`C[8HX\<zH0#}d0Z&c r8K)L#LO/S!oS)'fAPuD($cLZLic$掍E+!o V)Llg!|g, k)a'()S$)}Z&>E} rZ̵O× )FЊ"!Hغ/wݔbp)p UĚ+y_Ig=+)9(N0y=+F sv Ldh4ѰSJ(# Ц3Uk"`1QB,OdU@5c #+ICB'B G`ۛAVpn'ŷ$R!hjJ %V\}! 
׵6T;jkJ5,2Apzbx2.VȲ AN9wZ*,QCogFIEPg¤ ez pS  V!,(Nu]:Ӡ; &5r5\;/H-k->|H "!cg6GK14U䠗pfaӇC<" R B0fRLjA/:=p AB^&T_бw#[ BL'u6bttݒ nCX 7$jL5?nLL7UAcDAXKc8CX2 a9pM)ԳFf9gl rr#3@[ָ` c6 #ƛSZᦶڙ!QqbaxL` JTc“GT382 LjG^TgB,䅛hM$G C0B2ܒ8FTsѢӒͅR n.4pM)'q{7'qv*C1FE`"Y{TFn6Em\1Cˇ%=-QK~|Xṡ|!y̅wkSQ`OJX1V7}0 n%DWCZHY#R؍4a\Z$\iFK,RDϣ@une$FJ˿. 1" W) ܹdQ~-@|oV=fz]vvM0[kIŖݷ(qP%:[uU5UE9WR7MჯaE 2u6(qKŬR4DRڅtPm[$OaH{_2AZ`A2à#p;WކLP!_kap=~jS˯K'/+8ʤ%ٔeBV.`Z {lf~|SB4Bxs3wT&5eER3s~Ӹ*vYp4֤J(d.W2o9j:`e_|URvx]>Tz$k̼^ T/:FNz^!2͋;3%#?s'tz8Aۑ8@e9Z\8 ]83֔R]vbefoUƣNX;}+6hYS??M̛cNyaQ~rwBPI9~3761qIh#^""ZCMYbN2)hrb Tlؑǎ_BXvLљ<U$Vs8εao;UdEs  Id955'_ǘuw5#2d =w8;)zN l)׶$m9^d ն(Jrj7chY~|?>xo|w|@Pyo$.6LiCKqb1g{莏LhzwPW &w13Sݔ7ʦwwѩM&P~oޛT\ӄʧ9ۙ1sz)= kڑ{0,ylǎNPkeYeDiVfιʈ6F1]Vi 4F.{*/ufJM*G:zRY`܇qJ+;4<+ǙM8)7-(6 $P"N3LdžZE5ojӍ1ТVXQlEaE'!Li ~%,$ccA fA͂d~K&i#yZ-a, F mP&m/ @ȹGN&L /@s (oqI\уƋ!53ٺM*)@2ˑI&[RL٤Z SLFt?ݍD^#W9ϸRbf@sOV9L0X!$͹֒Y2?Q  FNAY}ӔPƃ1b97j\!1XU:FWnhx~x6I^K$T|Vr7 +8RݣgMç6pgͨ:-<#v&m[İ2t׶%ٶ%|"zL1ĈJݶ^R:o8Թ>Ez\d`ju\q&U:5N}oG܃g|vNIsmu lBIDs8)i<;?c UDys{4 % :\圠/ycrO oǧ:ѲTnc/2a3WٝXEV*kLnqhIe(ڇ0D9 i:& ;juN,E-w#2ĺ"τVԿ"i&8'A6 0 S"4 'v` I=֔onZakhq)V U%?RDE`r /4\zf)  m(~oሁD,hmG2Ջg*&="cyP';F\dȩ,ok"SHW.˔*?q*wlV<{|c# bvғdB+rc2nc>7Qazޓe-E5cT'VgeVgj=| zT䚠SC;)mͿ apeW׆G~O߅Qn XQ]څOwp\DAnA;*8:(Jt'aI{!HP߾S.I [KҸÄ>$;L$]%#FIWi'1^ S@+AVt gPJO R-ZhzO3|V [ *5CHB{JLstѡN mo$Gu}AzyrmqqX4F^}myaYYrjÞً\|*A2EY)K[ /ӹ!„1MQ Ό2bEK#KFJqT.Ty l89bTYwQVx__bD*U svSν4z4_/ f^VϖoFtǯj7}b̍v-7o0$X\Kί$c͝ӓٱhb]OpK K"L(WN7?}YH_.+*gjA{Req'davSy[ǿnGpV$X-G]V3ŷo+x/&on.PSNGD,`Bȋ]wٸNBT g.?OMMT \=EXe(ᚷF tW[oJEě ~3nSZ'S)UVK Ih!S;)%A- V E\m0$,\SńvJ89SJk_9i2'MwXA°r%r Sy<\ | @D)ȅYdw*2إ (UUi0)V,g4TknJp$O_h7YVG8}^Ԉl5 5=W1!eũc[TreE!/q#'0fHҞ!W`$U"D7G;R]`ۇrf 7`t,Q Kːj ٓzNZJ0U7lCyqeUpH²0%hC琬J>j.L8V) ZSHnSJjVpr@Ø1Yb핰>'E=aL;bNG<K͒Y]rhmT_~ U/(E̶+ӳ' hV=,1ηm5)O4((J~)b'"  - .%!RNO: D<VOhV\m붞\z/`O4+WR(~l㉈vu͘,0oޮ;qe/AܞsLdF'gHYZ|FﴧC(dƓ>/(j3 o07Ֆbqq |T:=kc4KyY B'?&ip^J&QItSs$hd{jrI=hZha{x2).̀^-Paw\6_n_OJ*^\JH±$k&r*%Ed8/pi`5 925u&LUsy)x-mi9팷eUl8Z[=,YPXYFףMfd<ϧ;. 
k+<(6ygEZ~;rAWˏ /NgETWvB՗KUWٻ涍%WPw1y?TJNu7׮8I@"yA҉ pRY9L=b`X 7dڿVڳZ:A. Tkf5iWQ/b4=ҝj$ηv%)V>MŠQ&S%5\uhWu@ht$zowc?N8WƇ?pW &CWЍ#~*el(nṝV5Uszu+{WA9PgϏ-}|2 e&2+@[/\t0u0tcwz}B0?Y utrhN-x9tt?>lG3yMCM JrqY(q  Pm˱4jJZG? etQ\S68{ %䯔x%6&0Ѧ} > d#mp c3ĉC()#6X縛E>R)1ݤTui7ҟa61kx{9i^Hܼ`%LwSP";0q}´RC !C Dc{youoսqVYݛ]= JG&AL+)2aB1Y$u,bsQXppcS!< ys>_/Y ߊe muݾ(p{W~SE@5}eF=yͧmWz+dI :!8 -@F0t' f&` sw5zjZ?\h5` ]#ha-/#Q]ٍ;6>,R a\סjVyVx<2SkeCbrIρϣ; ]_D|VbVf9iO ~d4_;?-'(mRHNpWH3QLfT1=޻r8:1ʍQn\rbbRttd&)FۘZtYBF"dB>eZ'J)xl1MBBҎoE^o'Y(Q98{AƿZHQtb72e$B%8 hSuZlkQeHb-q(iÈaU$i#jHq5-^b&K]Sgɲ"咤1 ( A2C%J>ƝgJ ~TRIIpxSʙh柃 beS2xj-<$]N7f~S>ٶi[W6jF{|vO0d0Y0B LbBŏ}.s)pcQ1V)W01MPFHFBPJ,Zgq!պ0x:DC4D"$, qY ;BP&(#H$,I#?PrJ:.>C_L;~7F|w4hA ur@D8}D),%6* A`"( Mvzs={(@D[xHrWBF (-Qst_?QNU}H…մ3t${*$AۃI&tiYrLJh݅=vwŀ ]Q  (b]Jԕb)DgFGRH &DwynX@;;TskPc (ZvqZzYL W3<(=vt/%CfQBC"෤XI"`㏻K ӎǞ i1tjj,$[Cl Q"Bl 6f 6{=X fJlƔj)Guw3"L 1~FJuvb}x{L{4J|1X{=>t0+?Z~ GR&4BXS:R6JFo*ѡXcs-{!Z4m^~wP\k&7B'!"uY}0su:G>Fhꞩ1Ffg%hIB`.ð~:(Q!̳MIn'|v\›ͱM`_uIPuUK-.$:(vQ{lm LG+s?G^#/ 9[`}MK}$~!YayxQ!<'vo YpLSM8DИ'FrC%8I$0I*&$  `fp$pb %7I4a(E Xp#DFw)^X-e4#oDIդJlny 8YDL^d/FL0~i8]+s9ŪkPUxNb6tڀ3χ.Q|9ov>㑻%[zq#3,ja(84{|{&6Z<O"d96(%)C2*/.NDtѴf&n+Pw3v noᖏ@@&wT)C2_+ޢV8rDcjs=xr%\6 o޷ x-ì* ѵs~ 6PlMbdY=x\4F9;Vgkɡ!G\o+0[Ujo$ς?_x %*nUcrb +V6gj @vj}Ul쭏kL t[-(oY֯>`9 H>\,w~\le{$%Uϣ1&B GUϪňbi%)5ǧ'+n2&߷]_X^)gF(XR0O v?حqk-GOf2L6ࣀnZURFx|XG 7jm?8^P5j< = ;Ogi CAu_hMUEVjXp hhɋk=hjN|}z5ҨuNZ\vygqצ/g4^rU0F 07 /W[!4.O2yvzpc%dmMXjMIe(hgh;{@Dy"jqDj-XH ؘJCb& %QS]+(֗d< ^"hڶo_-pū  yA8"SyGW}?8g?3|Z}V+a ԓ[Å@Z}pVciS8Քq^kB>"etdb hFR%*TTn B"X( Mj8xjv6RR߮v: d1r` Gƙ}_tٹ? qG #xw=7h>x" coX=eq~-:KbyM>ǮQ:L!f^oA?ړl k*WNC=UC0BƎuä2[Gv>8-^ܴuuCCzTTWc]&uŠQź1N9mͺy Zֺա!O\EtJvb׺ʟѺbPFu|b8e[UC*]ܱntc2[Gvyky7oAZ:4䉫NQ;k߳Dr=q}B#S_AKG5hW-%RNɉ+3,6 ەix,c~MiM?G;k:,E:yKlSQ1O{`.#cVq%]i`g.0V+"Y^ꈎh훥3Տ@v&c_/@brK$QE0Ҋ9JbNMsmBy|ALà7Oޛi~<θO8xoCOfi5x.  
`C+g*;/E䍬p{%l/{0t% 嚸drrգL=*M\yϯ}usAiS9?zCn* fK`(/ldy `~8 El|4%D٤`{'~pknV;(2_Oȟr"p]U@;-}2{+ 'rPW -y(o2!J,wk`-{W<5]y| ^ƧI4r|by{MT2z_/HjT:3쮦 Zvj}PK)DtvGgP"\tl^Hr 9K-+$#.G*&, /*bNK2 &:>fBDpkuoWOYO{A.֏Ӄ Mc<itIy<;4 vi }N(rʍPDt*QC뤯Go 0'!yF#T$3((WD=,ǚֲ\{dWLbM0{a^M"^jF $\b҈s#Z $ҖtmIucE|s>s(/ZI^6ݭ6A6dӝwD D?\`) MuX@^X , '5SSr&RM5SQb*R(5h,CtRz1 VCzpn " rc?n-Ixb,@s*,Z̬Cl /yȺͯQ4;z#_%cfqֲXItv1EI0M¤u 6ɘ$;-0s"8$d5H}Y!^s~fثky(iƒl,gΉB3mdLuqRO뻞V-W/,C9+2{eůja!0ՁW`WeuWt|I9mwe?66Ճoݯt6=K)ﺜ&V(u`1re` 'NRxPh.ʯO,_wڠ1tQ9WlM/mhWuVQCETQtKREQ#1JxFCȌV|Q{s(N*݁B5$?W.!]&V }wG]*05"۔bHI@uT8.(Dh>c5x0G ?Fר {cijNI!Yuru:!S[xdf+ IK nN~9* iZK jbRjP ^(5 IQSj5:k7c?†boB!|J#.C]6DXTq4 $6.Q1C:%p5M@^ȗ ے|Wl (`ͩTVjlŚi)4JX  _c 0I":`%HSHIȤIJ ͙D`p(f"Zl4O!܊8:)ҸJIz0U[PBAH•I֚cFRQbb"p90~epHR Eq Kc%X+TvĚkFZb#g#rYx5{嗮,ZU nZ3j]ϫP.dH#5VΌ8JV2\2QPo ]Rƒ\JВ@SL$ uC"$G=K ?My䰳;HIcm@'щQR{q /;k27-QHL2>6#Wi & A= dE of+jRL3ёCx|л˥`>l%e ba~K^ f]>̽j¸8wW\Nlן5D\+-G<$|Ehe8@@\ɚcP<=dAu 0/epػrm%K9BJ`IeZnq$gdA*m`8!X@%`Je- e-'RkԓuW73%*'kpw̏ڽr܂ruGS0#0h WP!U(|ǿLmJ*T]2Ly> 5 jgp9HLXXo`#cB5DoaێՄ=&]-=K5T1h%lwLC+nuBt<Ƨ/c H*taR5O T}@ѡxHZqvm7_ȕITdo,X7;D]wWvF5pL0٨?*q^M3{5Ur49ZU1!aS(! 
1Df*<!DkL뀙rsx H$ 56UsxgDZp%8h!,nm}_~ɒ C Ǘ|3CBge؏)qdn"6GvJ^V:vsTJQ "o9?W3j$XUڙ 1 ]fur]:ep,J1,ڠ8dps8Dܭ}̒N#!ľFU3rt\ -BPgc;]IådԹ@ ,d,!0K?̓j ȟؔ}"B7wlrj>U6\cG̀WP wZe]#3>3Z@Θ ϔÒaV.UApWr :){ϲPcMp=V550O7mDqq} ۶6KA?dE1AE_O8aPX4d}o$-%cn䵝 dy0ބXpTa`}u fjVU| =n3Tz o&+il 9,%Y~`̡{W1e>V1gH I!8B^Oewd4zZʻt4@A s:TdrSb^af!qִpe 8NN@]O3`@rBGS\m:h4m~ڬщJN|e ou6VUӡ $0 @QS|NfC Tq(:e#CsP9\@-pN9UI +NJ<:K 4߷؞́@8O9RfW"8+CpPhQ&Ԯ"ZDxZrsL;+j  77Xm6_c==$YR}‡BjJcLALb:%TS:Kfhrs}'1NR2%53'PE,KPĐNSX1& 4(wK|_'MkZgZ5\Nu^lMʌss!6 ;iX9>ۜÉ/ nΐh$͌>1X3e7n nBܳZAZ#]c>JOƟ^ڀ,>cζhPls|vjE0@y:"U_zj͉^r멁˕S3xEN:Jp(c{0 ;CZ"PLc=@Gt1snh,6%G) D%TTÜJ:MxM4p?/6mҡ Ks œIP&1㻭Ǝ!YpS{;J |W):#Gdn$Lw2aᔻ ZS!dQB&X}꘡?KM"w'׭E(g/9&L63M'9G52)wr]5`;.(f>~>Ȓ$7BXJXɒN[~wn ي(K}oooU_דT!,x޲ rhE^5 -0fg8t'P;1lvgSLKPip̗ٶ޼ɪMhe2|_طAt0xIJ6b.Sb>%?\ K8CQGTtȋ$[bl#ORz$.%sALR~ݒXaLYxEp*ni/),CNǽS wU!lKu:Q+Z:g9%OUxzO( E ̝Slz)?̹?oD!)n ڹ)vzVhgfK`Lھ `70^#i_ީ^XLHKh2D44yŭw1ǖ yﶋWt IhG "zIZzdYn |g|56+87_N}n0U)0~Ofibg2nd4^9Yz#'s2yf,w9-8?~#zھ#"tcN|yI 30F.݈rB i,4g.NTo9oo͇ֈKO.Jc}VC;qm,S"ۛ&5he])MrC\Bh1 dD$qF! 2U1eB8!Rnb;BKFpKɩ(F?hu"kQÊvnM{ކf6SkvS@ + 3sL`Z a) DPZ&@PTP*K)J ڡt6ocrjlNxoOxPe[d p]AJbvYt{bi3HXL]$Jm8$)36G۹8oEb*h7F?ޘ+;-,۱ CAD1$#*0a0YBaذ&2;.FzTol݇TV~46cBg{o|-wV>|Y+ l0J\D!L9`#DQaxRFXLXq 7wsveewDV!K^@ VŃ /ߓ(y<έ(лDLlK]gr#2t0M‰niF57h Bf1J\gXf))+i$V)joSvɹ/Gđʈ/0H V;4O֪o ꂈ\wrT*ܮr0%xmr*G f)g],u"jG"8[.u(E[?OI23OZcDNpC+vF!Ea}7Řn|Ă6cV$LeaQGfqR55Jfh̪O4?XKItSH)H*)iI25)#:1 ;dIL̖4,~+Sr蠨/,T)cb#!FDj -ӔjPLZI'FB`DAHRDD+e ) NLgܓ6q T,nQkᩙd O+*$uTg9ďtv.E_|zn%5 iRSFR0yfԩ4&EpF &iJF(TJj8#PIF2ŚXQ3PpƉִ;!bQ:WE9a\GZMH]5z9'I #-5-thus(mj0 v3g/t<Ìl2V,RiJ"63 JHYjxɌeB̸©nLIP @n6ۍ; ">AHD":ۆpyCӑ^cjUas9GFE@Wڅ;K ©(Fc/1T'.VD35Z`zܭ=\~E徯n8;Ncu_nt3JeɪyWҜϓr"s;.ao$OB '"zX~dKSjSPycm6u*tcOgH9g8;8(gG%J(" 88 A s!dxӰ299&B.Lmcbz_p@c!I11t{cȐ L%s7]wInoKr=is"O1<⒌SW\~]t{[2stI'xf. 
[binary data: gzip-compressed contents of `logs/kubelet.log.gz` from the zuul-output archive — not recoverable as text]
;R(u`?JZB ]QEdy.y hd3J)A挎&cC9❢ ?*_NW>{K4?1u_JKi +ֲ꠺~6Zp0vYZfѷbޚ߼7[7[<܋,m#rC61.uz<~Û<>DCP'C( ½ceIJ*J- i^ B мh7#3{uގɲX M+yKfP[OvcK¾"I @2Rke(i $BS*(Ky)ucDj>{p` \*g8vT+ʌJ` !iWN l8SI MἙ= WNɏVojn8^!OLILD>\zYh7ҏLtf0]>-5}_0 Vqr #Q)@@ RF ĵh*4B|=\UusO}dYxcX/,Dϯfom S*kIiuq~cnjӎ sRY`&m8Wzɣ `0֌' Slo9HN8)59K* # ̾D/j&Ò PP'dABwmmJ@*㳵gש\rJ0"嬳!%H!fC)W,Ak 4*"1cnf>7TX$ARK 8480U+J)u&3_:-9vۃg6Rpg;h"t\6eT^ƻJ㹐WgsRrT)mb꧄~Zvk%SVB?8V-QD N)B^:AVIZj 4J%X\:`^-8j^lP9 H8' ع`m|'7w!Wa6:͐V G1̪ZNkޥK'"Tzo~T)uFLE/e'QeŴt/4'eh6M<8IyփCJu8}QL)UIkS˜"O?S$]q ZM;c =Vގ2fjw"i뎐`#݆85͔<55B:U?d5ۆP#'s!py,[=_Dm A43FNO}lM9$lʀMG]r@AyB=EJ-E\2Pc}Yr+_!z,30ԐFD6Iwq[}\۸#grNWO劓">R??>~Բ`͞>>|O~w|(w*#/??Gp?_ΏxwҢsFO;3;_ceWRsLJ|> M>!LO>~T_i(u:PN'Y&yAbUdb0JqL(MCeP.rPi{Pi%*xAMi`HΘT@q/l*i¹ e 6ȥ : Dz̀ j4&f.P_=ǡhiOyGO{|fBN6 )"-н_t_OQE`wq}I`~5X wx1Gypxh b|KzEP+ P^c%Hf%_>=+yjm1540RҐ.8ZޑQ|;j4(8/"a\ JO~;] p̓ns+o]5g4끏|Xh8S1g ҂yp$Q]J@Q]Q]48O)Z%,@hRUVI%TBpNѲB l9m@ǵot"$.ԑvDȑZ*vcDdyy0L9YD(T(GnW^1"pGO;}p}Z7)W+qSM{|szSto쬳Oi%`HwF)bp]a滧fz'I8v>IFrE{j-mLrNnRsAl)l=m^`Z^`m{um>u1azٻ^:7t]4|ә||z]N;iԐ"ϭ ||t=6ˑC !lRo< uE i_;Y>s=7>PC=̯zScil'XqRв24TDFCi)ei D P.iP!PY.[Eč΅սLPh7` $a0u!cnPLr8v٠H)p荭PPD)yYGGk0{ žY 3LÔITxpA!CNhkRKJsSƢ2Dxc 8 ڞUx[^%"Zwg}G p;}y4۫kQzv|m,%z_oüFt^I)yYIKÝ!Ý㿟ZM}l詞<| *z!!5iD[.)C'.N광3u˧In}{Lzʎ{ ,F<8??DgGLt"&nl"2k%n>;[|}xKv2rA?kjQ߯J"u 2F5Nɓ,o>E}^3WxM2i]g'\N|~溾:vlC6/sԡ@gclC1ّ -!F*L3#(4̌ěsυ7M[4򬐺Ә𪿞j|"Z^54Z/!WC_tɋCqBȫX Ϳ !jf]C0Ui_˜519TўӬk)ohyAiΧ'L.,EqDxS ܁^%ˢ+Y͡U XaD=T~xu#а&i۹ǫIym#*df=~O!aNwϞ@G3{G? 
2C=zB\v'[jFH=-Zat(bnljӀ$*CE~>+CE2~ 5vBW$φ2D "}[O.WgӠ߅:'RОQ?-;Q=OW(뭣NNԍ]R#hD,ӻN% is"{ڍS(atB&QtscrLt153me2o[XKR*5tw+PCYr( 8kPh.9T8Qdt[0j=" 8ςU_d3QQ\Z%*nF{$TTğ.G\T@0AIo UB:T`o•@6\lUf'>lcN||v$epd2 ~eLUq_/%2VX>KWŇa??> =Ƨ??~~x`^=?:`w@\#/???Հa6?E猐wfvǪmwnWXpsGV+jgᆪ J)Pa/]xCumP6x( ![]x_9bxnJ<YP^՚\/Za>gkB<{@PISpDQ &Wdגrh1ట껁/+Iz$o߼VJTS*l2*Wʘ6Z9Kh0 ,R:4c?z$fh`p>*㺆}̇LGg0)rqՍ\+*M(7Ve7Նl2rbtjtWJ# h&%ܲ* t }bM 5tys;j"ʠEO͒#S "U`Bz@HZ* BPސ獆͹CS]茾; ǺaXgz6aq8<>% 򦮶"-쭠Z6|.t\(9Q~_13\Sɿ+㮭G = ,E ȷ뽌@ld;g@L읇%A D;s_9P)UɃJѨʶFyPIvD*jaZ11  0\9A[HdMz|HmlX:R@nicҐk:A_el bJXnkթ"W{=tl|849Io$|8=ݒJA/G$xȜ="M& -ٯ\Ñ>CA -mL_c@.Lf= 8%%9|SdJ2}.ojGX ˷W#`$ “@]ԘQ[s,!dpՓ\)gz\sz(f:99+ʉbfR:IL`qğ=ܝ}<׵;jwo7ͪ7<)ͪCb$dUX'X'Zuft%&H2Q,yi'uv-Ž`X5[j(1pb9\IXR(hxeU,.3ԣ$#8*^ DrE f.hqbz?zͮ\QeҪ,PjWp[BsRvؠ(< P 4RA0S"8Ĺ\'np[ ztG*`pĠ \JPaB 1]MD<.JVQfJ) U'*]RJGqnĺٌ\]MΛ/se 6M_Ӫ5>̎@vW݇@V p &~g^ :_*\CYA}5]Նbc,`[n-õKS-2!{.f! 1)h3K }/FJ Bܝǿ;4z:괳#PҐ!n5ߪRrʈ`Qrh%P' 1~؂[ ͨ,lUb(AMRVPLJk)q&hÅ:]AIc2VpZIҰ7gLl.-h#;2Ty]#W!`x7ApȞ/6l69V<=޽yF[dZ32 ۳iz(dOe aL]^tN)K>Ob!$2jts 7]%Iі5^(޾:9PLhWѥ#t$ic?^2ty){snLC l9\/o3=;qũ-*9N"| fH/hs?j,CmS#]=RNX -uÃhR*abAE=\^tЁ)ٲ1.F֝:u3sMNNN3xaL=nYjr^h`4~ks1ùS,*5USΉrj=́ךlSwkRVu5uE$# Utr)"Z>I/G(Q0Y%d7s9m/Uc&0A{MN75`m'SeNDBp B(P˱ 'sF)*:^$bZ.C͸RdRO+sٕ?h4$Rf~='?#S^̭\f)=kߑs{96kmOo~Uv/[kFr`nzU̡-flAd 9dmA57iIT΍xDmhKWw!R].ˋm%i6R >vL ɉaݶrm0> ᤉ,F𐒩I$mCg'{YaW JN?Mdg rOXŝޒb9)rÏ Tg?WJX+ϛwo@3w3A{ʸ#+gtj9Ʀ ІyXW[q ?y>, v~>Gf]ҀM8ᓹ_|aT~X,ל,~OY<FMtCFٰË}B<&cB<&Λ O"ŘHQY1fB3OEr- <>YrjA8ҹZ*و<^]ǫxu}޼nruw%b/]_Y"`0*:45T1P,@} F]2W.չF BH0+QWG^)KI6J 'PN1LjWݝi+ļ$0smZnj27s,!WP(n7/US3({֒`䋈 ޢ&z]UԴFGY%;Bqm5цx@ϨB Za[U? 
R tDea'}&f;]L|~| =q+q'OOy;]eH3~\5e`8y= KpFW6z]0Sj5Wcg<Sš 0nPUƍ&IsP^Y'0c{L_c+"%CG;Pjtqt,*IebtZgN?-Xg$vyffxې;9:*Bzb~E('E<:J iN$;`FA_͙XIfT3x8x"*8Z>B*ձcK5F︨p&$UnpmiしKS Y a$ߝyǖc ǔ%CH($ghg;7zq'y$2erIufʮ$(L#)Xg &IQ̔}gWhuD7S38h(i^}O?r~GK3*LkcﯯF; &-}.9@s{zF9[Ehn vfeYѻ6Dĺi/Oвw.f׫7^YOfP]SvL4WwQU5Zh|5{9]c$FJ7WP [Y}1*e\5|Xb  Q>$t0 5r;f9#t9rhnyPײ73\ô;C 5'6NVONm IέiH4>;hM91BĀ{lgVsAJB;T9T`}&f?H4T oJ}Ӄm )M43ŕ.CGv`/c_xu‘zECJ+aHq2L퓬|鲘Q0ʌrrV~_o 6NM}5w .Wxz7t,}:WJ 91 #  M]Nt8(H`!^;?}.S=i~[?AY J^i9*fw_\li4NjMU2T!0Mx^>]W9TpT kA**] Gsc ""gM=zPYFXx%VBsԀ sam@1f3Y:>)hƔ,rTcK5Ƃ!,'>>hѨ3Й[ӆ{vÕ` !Wb⟽tCO+gS<ۑr0|en/:|#\(rb}Mlo¦ݻ"ˁHҾS$IZp{o9t&a7w$Lzx$4Q{Q>QrQپREUX)LӃkm)zzp} iFpnO$aLKYUKdC&Ym#m^H6r8fH 3NRk׮ ;g4W+.p\20]&w,$~sOu8^4~bŠ"E]TJp*QMQ3)l ZG+G!x6j=k 5kRyB5z$'f. 3z$(|4͏ ƚJM+֕AQ{YktRDP @P֚_dmF*cE]w ˴ͿJ8m2 ioVt) j6֣Y!1c lQ?¶^ Ϧ_eImyqV{6w_5]/ml]/m9ǥ=8!׍Z=D'YxZ;J ФGyUXOVZ]vrcFZ2 ǏӳIIxoYc̐7'BC?[ǥ3U;ON;y <n${"u8Mm]2ԬAoVONm*@Q!vxrx9i$eҙV<:8[P22>2>\D>Cg@ŇZ@_HCG ̿9@ZxHWp)0Cd\zE!dNvl/'^fU%P9e*!V6VA<320絲unIhd-ZTB Qjui@8ꩦJ=\Q#Hl +Guxe_îA 1 K m-S\3*kf\rj)}kI0%!5IJ R`k#BhDQFpMƮH&˕8aSƶ[z<3?? -Ny`m[o޽kO7Q+{m/V t,Oj+ ;_U^U{!UcmZJ .QoogWGrr5cY|7Y7eLr@'/wh0 ~o!/*9iStL@-j$%9w yip(5ݷBc5Q l3:,-DE[@:쭿Sy4j ߏF:&L#.@)^4ޏD:J 9$J5sK&zr5'%|XCHT7mC̿4o#?!99E A'~ Ri<$JjT*xs`# l,~P#˝j,@>TƳW;b&f$FVQ8o}ӫ P0Q:QS*DPy+4Vhp *xF*c+åwwUL&R[K{J>,ˋ&a~s5b!7_59]f~v٬+cˬ9߽ͬf)jτsv޻DkZƅŒM٤)jO+k 3X_7 vv |Gfj?zşݝjg% l ~|i?[h7c q#yvM xB+e9*llȅ Ғg}-вΛ?ʈ{Vm*GRޮد r)%F_݌it9v urWi`-ڭr)F=nFj\ RX'w.p!.Ҕ|샦jY4}<ݔ1h\ RX'w.w$Ƨ}kx[ y,?O1Aw΋⢵m~> _"o.WY>f4_dϕ7\0C*2TStr5YMqǬyWe~r H[c$7!靽 "w$Пk8Zp,- N!kDNoi=ZĹ!GS$[zpvH[l}ƔЧ0e6c6.ʢ{bu Hԧg;\6AXMoz<:'_}57V~ zLL*_p}?w%=miU 'U]Փ1F8Wqyi=p2ZdTqXOnF2 -hsxo|QrvYt"cnKokmFEEm"{$͗ 6c'+3o%m=bw;dVICET|ݏv= x=G3>)EK*Ig3/)H׏/ЪvB4jqzm~OfjZ_ƿ≋s4:{wtr{cǢO ^~{UW~;̙a8r!8.h<]j$㐀08xKgc{3.ы`ƹ懔bKGXb0/qمݕ5SnjS4"qCF&%I>JDK- Z%/wc,`P, y,9)ETAHy+2A`ͭg\F@B+%\*"\`a#TLjw[dA)Lޢ.1l>c]Ry;@J+b[Ϝ? 
e<)ilIJbrN;L4lڰ HxsbH]qH޿ߓrЀr?<8-q6w݃Fmr߭w]39phξ4aJ5\`}=ĕՇiKyʤ7eJm+zzWkzvߍ\̆!X6r l)kHj8k5y$b`n1nމz.NsKP$9AY\_ԧ3JFD B?"JpހB*Wp}rlI9[+<;o+5<ͩh=y sN7m ~LD;EAP?)&dcaӦk⡲%%'e.њo~x⡝A)uΖ[7i8񹱱 UWI2|fSD%~VOV)?/ުOq0-M&ܽٽN!d}#~'-f4^])SM Fّ= 5Ar/oqRvgm݀"o1F0hx E\ѡex>mh#6 (o?]mkQ hsǮGfݬ)9J{nzGF AD x]/\)~D/A[5T$mg3 |okt>H}WZN>J@̱t%m($P{,zS.խ[|6 _Uvl2&F/ÄO9nsH/lUS_0 uo2;0]'3g>;|Ixo(cȐS ᕸ 1CJfhׄN-#| I PITN,Ia;Mcb.p/ ۍAqvE>pD܀-,%$ϞhK 0q8:4QOkZRo⟦*Y  5$ZQP(DrX'i-+lYoR3{wF~Ea]O43" [n5)8L,4i8p>QQ`K]ݹywgM{!w+X+=V]iJ)Q^]]&Ca[0q0iƏJO93X,ʵ/vq )i,\mµD]y*bBnnn&~+mtɃ¥ Jai\dNBY:AcK4!AЏ.P$_$vI*g}1v6b 1" &y :C+% /&#̹_n]],r69 cGLy?_O ;WO;_˕JڦH Z¹r𮇞D  DxRAIE~ahOMlM,^ohmjeX=3=/e qC>br9Tz0pj(j31ɩ5gXz u: ܇ T[1Yvۂ4b,_Ԛ׮%*׃Ɍ&*ʏ'q?~G~Oc,B z~ߧofV-ejt_a_7`[m]Z3] oc5E62Vբ:?0=`l̺_;=wѬ>U릸u ĺNk%][솦Z6z+{)@IOzpm iբ=㹏'3Ѩڤ;ڟNn-~,J?t't*^EE`T:Xa+ &pu4RKTY{JUd1n5 _c{?G{{՗_zFCljgz.|~3J:a5#EZ XA# i55')rV͉45F$uLh!OΣrCD8>!c.%1Mh"ͫ5$Z . Ϸ◟ώ'1?UAg(JgP0XdW3'պV0Η~p];|(@o Ex/N>My6/,qΨi -4P,EOyތd^(,"AGz]c5`/6 dy ,L -#̫D:Z6&ᮻJڃ) 佢81>QޤD=N1st|Ujh[d1 fze_-AcI^a!NHoat>8Dw-4CKZ?qTwTRі40ޢhcW;o2. "r7('`D~Rͫ/ܽŏmx_|_>eUror-ShVgtI9>)OH ~IxpAN G>P|ڤRIKI>I(|=?;#8[Hs( k5f@Ah0>^T0<of"7G$꠻%wYS(;(z&R&=Z!Dqb)֘j9NmyT_'2,yf +V FHZQniyq28КdOsq`OjK+/)~/)xIK(^RKR(v.Hn5#ΖAQS*2(/!4(&eL2(hB#H6?;|wUt7 @)HG~(sY‡RV"D %[Ël^k65ݨVMӏWqz5[9ߔQn~t=|3%|X 7.B8f->$>?U5%EOW| L臛0̇(-3џ7_@Os-ѵ9L#*}3t 6l4q25W$='1ш1z%b a-OFsL{wņ{@TBuc!U>`IT BKKeZnDO8)"$N딫N딫:>(RB2*8 Nq` &ؖ/KrLP0ڜ$O8eM8/ؙ aJ:Lri)DSh#9謇FRqNG +)Ea7Oky:;Y IsLK S&) )eqȐ9n(S ` 6\g MMb눡6)[t6:(k-ܛGʓÕ*="F F<;k7Ji$E᳸f9iυV=~ݹjG-EAw:Glpwzc!B16a8r!8.h4 8$eHJ8"5/^:zb TrNOweqHz,f AҀn{zeBL[ۺF*=dUYʺ$2WInR/A2">7e/^|RwN`s8>F`µlW DM̙*`E!!TTFr@I9g*S 38]|q&L+#xǻ%,Q|Grq܈,g8qNmz nPDpڲG™< 9 8I;/.h y%K-ݼBX,H1~ÏL,-:qK&!d*oأ9p@οVW[ آfWj鹽m~wßٺŧEc١F=UpIz4^ډ lc[\lTC;TšQc}8~oM ;4[#Mtu喩I^WZg IArLԒnG'Z>ǚưs,FR-TߒUw :Zyv(4Ծy.kߨ;ڹH2(Ϝ̈f7_Jٞ{|ˠm껑)rŻSr%Ax&7|Lu-E\ZAgK<@K_c9]̀.Es;& 8*istrHqt}cuDpϿEL]#7J oY{t_Uyflv(rZ uؗ1P5׶JwJ-%CWJ! 
գ Oծ19*~5]35<:$|\=\jd{ s&.ă' #D6 ZMTt9Os?]ΛeTr6\YfEM*۲] *Q 76e8`Hono"EE4L+8$chZ([vy Sny~'7w?G'Zd2m>K~wvyQn}g`P!/=7nPФ.F`/袖JKAʗwb_^= 9Pπ#1+!J.T|Q[H!+K p!)IcQ] 0clf}|DE` - 43 &tUBb%j.*ja@(UAui~(|;bFb!0>d}AFILYHuV^}'I7G˓8z:PRZ!΄(\A;79\z3fB~PJV[`UN%H"M%rU.i%HKo98+j{n5JFԧO>50,+->59TSbl[90Ih gjq&1Ab䂂I"M{L so\^qk}x0.(mCipg9J[R@?O@\ 4JJq0;ʺd 8) ^Zc@LIZcTh4-kRiFhK0!͞lb$ B%0i ~WD$G㛰#+&$;tWp7vkL qLQ2pHz6ތ8]~y_ T>}/b(N3sx8yZ%Tr>Q{%lM(6J,N%OQr"T{ 4>'MI>WdrId*3PUEc(PWc0vSBF*ư1/ )͘,t vbNwIo;FqUH <\TjD(QFIaua8)5'm.dJ4ĊZ%V+جMi\|Zq[.LrU}P/J-NN} w߸{ޗ:qOeҧݿ'cwtL.^HRr?wՍ%ϋd$7}d畟KG[zh*>}ۘ[jO/pi=cO>rG# y&bSаn|ny7wK tR:m ҫ[M4ɦw4["Ax(61icӧw\:7n)6&nͦ#zT BL'1m=Sλ.z.,䍛hk"*Q&vws_ee/X!$gR̐uf*YIV*ShCrXn8po-KS]^Fia=& 8"ږ2i qP< v'bC"@ι9͐&#̭4рH( ;}dFTz'?v:F,!'ԏxOPXD c5/Dl'lQOC `Xo> v`O[Z5lnr|9Tiգcy]1hwbFt nx$+| F r >.HoMQk!ul <:y , V/ޱM}I򢴏f yMu^u^u^uMխMƕ+n|*eISREY'BYԒ%{ 39r,o~K~wvyQn}GcluYgGbj[jL Zf$tl[ 0@YWY*9cԵ־OZYN+&ƊWFk**#T kH(( f[q4 Kz5Y!wOfK L!f<⹛\Wux%)FҀ[{S!sS*+I mU0n M$ CKL26HòKfXN Q̢ko6簏oqa+N[ IbAYfgHSoάAxd!F@cB(FX!h54ioT6ObtDeQM ̹, ékHmI:ERwТĨHfo\WkI!!NIaЈWp8Kf! Fq}{.tP(f& QוJiAP\TWuU! $vU 帚58O.p{ <u{q$R;:zN!InIò s H!,Nk<:l(;5IU;3{VuMB.n_]Yʾ_STonº k;Ow5ί Ah!d*)$ts - ]c#5;N'Ŕ4찴x]%O/ )0)_d9Z5䊌+@d*?£0:G.xK %^&lsA`QB%. aE^@cov۹5.H^ ٪BmN ^񒘨ѭ ]!S(5R׬Αrť0s\tzԛ7`{.&n *c ͥ$Jҵ{TrU™UJ\Xl&-%l[m&q )yXe$VBQԞ=xUUsm CWLZuZN S35baYT -R[++JkCjs J&(EaT*n}pslҍtbi)@Ԝ(s;Q%r+RR-K[2\0hlU9UYrttLkJO?.;xhy~,|U'r<b!jw?m|~q*_.\Ȗ_ݾ?eLd̀w_>~2??\^{'[e~?Ӫ-;7+~zv͹'MO>zG@3.V 21q4SI &lHl].cT8HXBLn+^:`I؂$Z;ڥ>ÕC}S'ۘr90mlk5C?ELnB6۳i|Qϳ.?7Yy|qō$W}efV)1 fsb'v gyV.MXWa^Yt 46AbM5_I)n 3!Zw6LSDq }]Cv?Ȅ[T۾Kh6vPCn:  N:j#wnZ}&;8Ā@`?l P`1cb!;&Y8T*M[McxE:`SmY8|x |5ivLmĢDFC{wv`Y)z>cT r[ֲKx93ǢaOzv_srapL AicKIaEF)*Y*,7>[6B<2r)сy!'t*S> . 
_P)g=^Utc1((_ɭay)@mI2֪V.yeR {}=!P0W4p_ΐVF.~?\nơoˁ5T?IDpB<%l`L@h/ւljՔN^Ɂ5t](WJ=HjR ) Id)պ-֋4s>K+}w{# @1p(>r \\j&D4 d#X Wqj1e?Z[e*C,1+gٻ6rdW,{dXA,.Lff_f`gm9c$O%-eIFůu#Yu`sJ?q'@س{L[}<$aff{^i[6 a'ykw׌%eII+'jQ>ߕE\DE"B f čܵ ̅ fg6^R#Bk.; Dv$H˷yDa̮dI^3ȋJz(.ha4lbqj zNXb;)Jr ?8l{?ǃ9WӉ,h_VX(@ L*)~| ZKUy-I_Gf<[gaLwpT u~bן]$XE=|5b%w=+`vKX 3kFq5|>ܮ#?܎͙isos,(!.ZW0~p &E^-(p)\:;/~@Ią}-}> umC\dK0|Oy?7O/5 <#[>d]mNR~ 同HJ}}')1֋t*e%'5 hr;y7pS y<xe\gύ(IgU(5 !"LgvEth8Ԁ6 M[zD"e"*dM#wXbg)1<T!e\@;%5:Oh@^n o d1VޕP8t ;,sgџ;Y՟[@kz HjYgRPFg2)HV\ (lVƔ+ѿ|>ؿby5z{sC^\Lr2#g Gmz5"{:V-9(>ˬs4;A\Gy7c`J"qgFY d-E&]"gσQJ'eM!q$uD&% W XٓI Z>s; ~zwM$60+Pu3an˿dcbRܸG J}nRyf*(ݙKw߳1%V(sdLo+d" 4&n! C9ő= #{0%MٙBơ7 Ow~+(G%N(gYJТ()KOcD8cf(s}nv~ztݜ\^d'Owb'ohqމctǻehpݦƔ%щjv Ed2`s, mcOy$b<C9#+ά't:'h |ҢdҬ 0ĝ y}V9[m`P3Ǵ"RA_\ȼyuxKڻc(ar:C(Je0͑£+3Av on'3y>&JsY siLax&h돏IkPz_Y58^Lm9BoIgE #/2@pK*|i"`y;%hnr ÷kD5֝A&mIw\™u\ 0h c$f2Jq/$ʳjt40HYT4&_MSo[.jRQj ^er3옑[&lt bl%D4Hݻ|=2u;ɅX/ddLгᱱLrV׈Y:M1c}Nbf ʮWJbS%o.f$ qdIr̬Hp4Nd jM1cL4Y 9ȥX/J"ud쯙[ߩ圦B$syɄkhq"7r`*՚m$g%ZB7iXO1THX#-XIbfŬ\"bө&kzj+6"[("@`@Îg]8!ND&ZѲȴA"M#n^]ZaOĎ"\ uq,OS<ŧyeQX30:JM|0.g_MȍjT'..H,g 5y" B> dm 3ފ4+IbWKR|R Jfj0ݸTO՚E="ݴT"5Ev[`eOTJz \Ⱦ˞llV9IZԷkx]]d2eLpH|! l.Ʌig`=iQRrFҚ> 6:/n c 4VhcJPFJ=5=g [~1@ !^Q iU[isIB[7RX>I=dfT"|)s3 ~\Ž|lxcϕk#WsVz^wtCh`]|S ǙeJc 0dGm VjIC-0"i⺟B+r7س{#m1O+,20r L(r$4QBRz`J5\:fM裠Tm>9}ބX{ _n;'aD)e1 Wa96S"/є>>1{.,^zOR иvdJdZ6~zywތ'YH?|AF<鮃lXCY^wVdϛz.*PӳoIcw!7g9{֋͛Db&q霌LK8fPzh{o-?:"=Rby??vxFoO-r] εGoyB.5l-+05p&W8DGrg8V PW4߀ Q=W)wV/ֺnߓU~9}z`X+-aOjIMIMIHV-wK'|Ëswzb0+0Ernov~c)8>ѺXDA h-N~;z5zV iJePq`;͟9=_NV@.4yNm']Ro2 xCN)Pk1哕͆櫛 Ew>s+wS=M7GI c!w yX|J<-H`pyVg&5N81Ʃ t}I=:݀CJC]Mǩί7VL nqC Xh8m\$-(=IA>Lox 7Lf\NQ6cǏ"\_۶;XCkJѐEԌ~d^TX)ֳk39פnmPB*9TBʟݯۢR6]cMwO(1uWnOgWn>ηES>eߌ.{ W+n!4tήf=-hqM'W\B~ +*OtWd[+xSfYeYU[!z֊)ȡ%SYm_-uӊJS`3kX!KV E(7CA9i+ᔧ鄕\SMXy/n3ljY9L^|bR`}9-Orp&[d)3)#c2ւhtcq">FEA\6r#bC"`>,,e\4g؆dO29)Vbw="v)X,V=usÌ^dzkoA̠eoy#2$9E%ח+]wMZK IAR7eg*1()``r8rn8lhwd^p`=SY6/J'x:yz~?ψ4򌗧ޗL T5*OS} 6* ? 
P2Ԙ?Y%@W5zMsbB-ꓵ <.=2]4b8Mz4Jń7Jq)4O%gL5)lھZp7&/W2d:0) .5W(*0b*>,T脛:BB!vWdƝb3NmYjmo Z,ϑ<:m+AEP`4&$2sh:3 N;߭r4RL\gnή7V%\xun퇷o>ś˹}y4-j-l}~+uK i : FR^DĨMEuMl?xBڙITҵ4ʓ* T,VGNG`\99GF[G֓^Ķ@J Jei%,%!'N3{!JDRjE)|(]z[voHBʹ?2f~)a3AgBWPe3VU6"21Zv@U*.?HS!/JC ^e=@Wz<>!K4 IQ͸@ݑKՎ8zU/oNR&d'cʿzre򨳣bjƇ)XҠ$ә1SMb:)U=2>UƞsW)U$k5F%FN % F",HwSZDxsy79zaWjyvT9,lK8$Lv(p.gICńr<2GuCEm`l[5Whydͬ4WO}$fvPM#:~d烬]bE;R2AWC%`'./9;uxyE5b9+*aL'9 ͂CwRoqk(,PO$e W&ijnMs".*d}9e*u2\,FCjsD Ɨ27E+ILr͔>gCܬ|(IG wx5RFfqb|@RO򴶨\LĪ֏l3¼V[}#,۷C):egw<-쥐J^5iŌk6C_bَ:k^%(2RALhKr ؁<MIh2OxN+\DQU8;/@ +4*M{-bj)QrN"?ME gbb+f#qK*kA޲1Ā`8e4q%Jj ^Z=ܩЊ bbU-SľjL_)wA {djK.W _%)._d@ g M"E7U̕5,) *~CGE5#h| X(T3k$$D%2c*O4 TpH&p-+VUYbT& o׉3D"%HXS2 Be1;JP;nc鶋K=8|M ZFUvqql"& "ڸr,8K6wG,g#sQa]/rp Wo*iM%mŹk1*`(^5dX'; Uk?\sRlE 9]3+J9tʁ&j娎nr)Jc` UwnX)dXa`4mg {pNaTZ(9 I-s(z:T>>5 i ut7cMHT3Y0TR5rxI- IO ӥ}T6$5v۾pc4ɅSx `=2Nd\*>yboZ]~rJ!nͱ|B2ߡXv8Q A_~餒kkfm<&]v3#4M#2hD3a ό8 Bo2ubhNnGzωTo6O<NQcB#mZ3)A9MΪPTqZQTJ _  b| ) TڮV65}|sP+E 5_xkX7jGL9 `ER \xDq)Ggi.X4zg6vBM1 k\2O!c'c^HEhl͘JMhy7c)\ ;IxA!.@4dO$ڲ4ݹ(PJAv}Rĸs;uhI%ƛJ8:TXSւC<X 蔺_xw|R9߬Ez.߭NmatMv8 ׇ[N/IY;ehׅ]bY0> JW-d?nD2{3D2Gb}tШ5zU]e~NIWCJ>- =uiusZadɔε*5 *Dy`6ՑNsI~ѐ --2q!A,%b2$c6DiyNn#ۇkF{ǬDA_Iy-6r6hy)uk6h'DKoIWTPYλDgOGK-܈Hӥ@`5!WWqH4bwFEbK$>W@J^i0*|ϥ.lT^^_)]@ZUQ^SBS;0# gL^L", aϰvmO 5{vGnzdq++zwN"@r51'#DJ: TX*b#+`@.D R#+冪~w'9,nd]@↢7w~|H#> E [=տWwU>"P5"ՉBOVp2P>$$ As6:#HAѫQt[{k0v4n:raJ'𬖸R*#Hɓ>.j a <$C40#=VrGXmMz"L""ب W-B/2bǐhF(Ո96"O@#jPyGd&4R%Qpokkdt 8P(Y6y}Fi۫+ࣘ ^(ԩBS?$! F}ǁFwo!NFŭ w3s85>%FQoRzu:k G^w!O{ίb q2U#Ic0OkVk'hJ}VRmiƥ$&&Rܛ:EAd. |n9FVՆTeVh*gy7L[A扔B%IX2Z1HtE }0x6[-E ~;ղΘBJly8~۳< i~f9CƎzKNbt/9q;]?Mj8=ߟ>>Z8ńfI1=h 200QAY9 3ta#H*Y5a>S\R= :'TӨs'"=,kzVNéS(K]=2s2ЂOCC'? >CHe(F|q? 
ed:[{?;}3Sv7DZC*D컥=}tGǓ}c6%?<8VضD {1W=;7 ]=Gsgh k Ӭ_~dQyObbqx7?5(5R(ӕ Rd6HmljhLѕղzq4"8j^Dc Ehak)Xw'Ugp`Yn1-͞"۾:hW*+C.=TJ2DPmDg޽ /ݍ:Eˇum#~ӯFo*xxICndvcjzP}-Y= gXG>G BWxr</L&m*J㮰ޮhVxZﺷog_UCJqUC*Pt֗:>Nq3zL1GNݘQg-~h.^Lߊʺ veh+3u'6,JxD] p<|KpڕQyMV'-;1iqkui}mpdr'V 6oe|dTx_-4Tz;Nr#Γ;V\ջ㎧]ȸx8:RKY+ZrZGP5Ι:3&L PP[ʂ9M(2]~ԍińJ9A*A^F4x[F4*}jtA\ hM.pen @aR6+uCN[-uɻlm;rN1B xw(HܜŻë^uN,-e; U]`+UL'pD8)8;[T3&)]FA;egt8>oJZ;=K>"vPؔd=vl?-gq?n! MIyM}X^F~NV?4X8RR:6>DZ"w1NƷ1^w&[TJ|? YxSy5ԛ&[PɖɷmqJwƥӛ-E52ߘ[Ķ9 'X)x>.FI h)m׭y$ PT(eIUXo,?Σ4ݹ*^õm@,!uAlnP)bȏc&|W/?u.c>8Dǝ*z)/~q_ ;q&ǎz^)̴mjs@WܝZcwJf)]pL+4g f© M{CuC rUT >e~V sG*Ir|D=HIx{N7vIW\jkJ6rEhqxbNKΔG|vWsj;FT1%ɾ+p9)Ui܋ۺt@t{0׺Ԗ62=kzžRǏUd{y @9j_UYV_!ّ]Fd,_xe[RJhV0+QJSZ|+A6'#].KŐPFp:z& cD9CWh$NS(ov%**բn: e*R__(AVZdkePVBKYBUDZk\JrjID$!^sW` Emb2UM_H(7}JIoנ&t#G`ѧ,76#p߻v܇fń{;y{uNGCw|ut*w_5!#V_zK(P{㫧&q!JǏzrI~|wd:["h]R3wK_w03[hk4uHob2WPy4( '+@{TJŲmm_4 *޵hV0չ`EQ*.GcRFFIMɳ\-Oy !xd"۪4}[LZL?@am\(@hF2"əX@_9cUA{T8Oa݆j!Yg`3Rݨ0[Hp\_d{- ]Hᤣ?oH{Rc< 6)n _{B@B1,!8t nmjWg/aW:ݬ\ı%[fjg1+Ke_S\ԷǗo8&mD6>.ٟ?3 W]@J ~vٿJp:W*bj Ͳ.v9*Ab݉p]p1iur;/Ԉ䠋ށw@/eR\$^b_͚0 I rhDEnh%@ 򊤳>F:9y< xZkQ@nؕ){U2y%jbd7HYMBz5('LyuS4xapX!դ&⢊ _ +diNhɕ e$Rd΀&@bq@nKڟ/JAlzsnc9Nty`E(@JPFZ{ G'6A+Umqe.T mL,ă@ԸM&޼J Rkܴ/S%w4#"tc<7PږUldQa'*QOz[%^X[t!&[ )}' JGԐιi{pfE<#1Hg穐] sU^HC#lIsi(ibG:s-7woQj a @6g Ψ$kc"ZFKT0F 5꾜-d!vaDVq}6N?nw0. 11zב-L{ThQD-#QcUj(waj|Q6PiH4JDx-H8 \KPdX&=2?*7_pGT$W Zd`aFYx2xq h% E?Dƙh.GT874e ȆY\=ږ׉աYA)]SK4V;zhx-Gh(|- G- S ܅6q#◻.%NWe;۽$[. 
c1(-_[ט!90pQV =O7F~K3||Zw4x`QeЄgp0y 1ذL% Ifa7Bc)NTB PX- S lR`m;H zTD`i6!L0)K1 ΆQ sY%MK%nnڶVI+Z,K!|ֈP1"K[ETҢ*`ڥUjbHev]SDAz)^qS,M)E.e} \Zx x<0 [Dmout;Hnӭ.ӫ0M}#33rwrq b~1+%qfy?c3,`EnM]\{ )lܥ_#@R4/˳ϭklTx+!{5W:52##&(D%uibʞW pbřa䅸l/؀" P֚s4Hz:K?SrpS 11 `Cr{TU)dۆJUY2 )$d\(ęİ[aq,BB e: U)Sc,ܚ=F1TRgn&;$V(tWΖIS2Iɼ|(ʻK?Mo/f7:0CDoHް 1Rͅ!/i%^hahϖ?tʥ~{ hQ&ƏT#Ҍ֐}lEKZ[v{}F_I8Kv6ov[wgL9-*u5oE]b+zB؂Ggf>O/5P`cwF t5btwx<(znl&Kh-tϸrkӢsnm> 8ySZMl~[uepJrmasdaǘT̫smOU,l9+蚑{lo=i蚅L2h8H#K(z>{k7HpTVqat&oހp*&BFc҉U${'q<~?Vݹ=~YIkg#Ѧ ]DzS-7TttT GutT37{Tҡ,*yYtgLHwpvo> 1Ps ۻ$մ*z߹_ű;FA+yzK|ћ>bC -os3!+57s B|G(ѐp)N{)hPb":cn;<-e$j[_ nMH (bvvAb":cn'0;ݙvݚ.Q2uδvS-щv;Ra-dvkBB^T|3taIrќ}43g٧ϟM;b\n~_3A*R}ڲ>Ohr&hFH!n]\l-~GpwTfYcLoϝ%d҆-u[4٥Icc{pkc-af,@6g Wipn7y ^T3?No' HuȗeII=xB)xR ='U@S:8diX1jx'm{6+5y>`k/X%mKϖ.ԭ-/ެjg0ҖP?\dFQGkмx?WW:Q7Yr-oN KG1]<W65> %Ūicm'd : I'$(XJ|y'5CdbZ(+uʔVVdX)dfscxJEZZgSfa<{aQ)Ms@L-Z3H8|>2Eb9F@;b$FŐDzB gA{5qߺ[f7~|fqs=dfqrϢ[UG?<ԉ_}l`|>\]f-Ff[Ou3ޘŨXjf} %r<פ 3͗N@cKy,K^}}9[iPv=|p9u#E1xQoEE{8/f_Fӛo=Q?r(|u?Áynd`0GaOx^c9W+gu 㭒(^l*M&ADZ^ΨfKto? 
=X[ξm%Mj?#-6 #FgK`NPYΊv `e_M"U)I3 \Jb'O2(Z`kJ34ŤM+HGa4stA k0BXqX&*bP±If4!!fDR8V)r h1$iqˤÁLj^RI\&X(wQ(&H -<~OY Q {i1ZVApHW=E\] Scbd-t6"qRPk؈QbEb \8uo/kփtY,oo?&9 kPEސ7 KyOf.˟MP1BtT-sӏlUxTm}APSʁ{HݔY*O@|(K-= 10 Tlb$2 g_֋H۴5L-%(D*kQI#j3T`Fwcbhvk|Jq"k8wg< { \0R"x+;^G.-mgmy`0 ¥ fUCdҳƱS ңdB_Nf?= lWfu9 $^ {$\22a`i KlbɞIYLU"Mfpwna 3XU%}[(6KGY$#`d6wL牘./L1OQ'/1NgXSb%WgRm։XĂ*}loGh*6R}&T#0>T͵kSevAh8 !myգ@解"J#{bS'0Ł.!1Zp{UkTUtQ}Ǡb<-̫h4 nW|~ƭgȱM\Y7ZY>DsXvG1JWlUơu 2;_JTJGι0c|Z?k|b3>)49V㞂I_LڶQ2kYL̓D3oƼ鉙7= *;8UC`} 8}e}WBy'*uݱaq 鷩< 1\4al #@a)ic1LicN吤:Ėg8 l ޏ:`v s]+s?v4+JBᲩ/Tmx% !Mi.4\G0"OK Ojی ˇR 'ûg]1yS^Q@Y$g{:BY,ch``}LJҝKmdnnWmj6qf '.epc1xb !|6}eo^WNmCr#7Euj?';`{Bx%L;L$+gliǁMP,FKc_5 M wݏn$kq% uå(K@KhP$qosVk; 0kPR ָ,aʝ(3)c ef-PG632O,QwX6ռ6՗ʬ/+\\TjU;eaM F3# /=P&J]V]ga}S/\os$9XO?.O7vk0JE!]*M!~/ay_ټ8I&1O`;~K;Hc+f0HzL#S&x2~A}`Y&AЂ/N~DPwGN~b ރv!Ѐmd['Bl=O͒N1u>D3Wv99flqWGjmf>~rYV[vܛF,̧re3 o䄾 7͗|+}{z3kq_RܯK١[/~(7wgnp*!Us*IBUzǪ~5M B块3_wUg$?!8 Ob3|Ġ{ב3 T+Z)b ^K ħ+^&4O7Rߨbu,*β Bӧ47˫y·Ϟ2ʤG`b>[})|n)o@q׀br<ÊI"ړ^{k0$SPUdPfѠfѐƧJ>VSAIBf$rySY};.ׅ)ArY<-lb:ǁփ۾R'QL9g۫$á ;:Ԓ7z%1SK>r֒Is@&5ck'Evpp`h )'w<҇EH^דK<~(~L},~CWΏ d t˘wH)d5+( d@N4}.@Q em8 BWk ,%Е $%R2RWYaQDvWKDH5{$udSi),6UA *fPdɋ/E{ cV˘+Պ ;cWļ5Sc#]Q#XUvƑ] r8⣱ ;^@l΢*dhIV<آhɦ ҩoW7lzj+ݒOd ~={oL]):; [z7,YxBdQ>l|bKhw w/W#1eh*4FjOJνdV([IC^&锷Lu:n+oӺ%b:ny$V[~1ԺeАI:ٺ#qbbe&`gKEM#̔7`]0U'P:_ۨxr$J:{&G8[-]VC[@!i« ,M4U,ĵUZ?裟5OkN'=IbICqf|' !#y_T?'I]!mP DAEXY0iᢤFTʼnwqb~k׺ʨqҎ -%g@Z2K-1,n4wnq_y4߬W!\'#;200c~]-e+;"!Lwȧ== fi|57{ϝ2Yѻk|" c܎Փokf25[PSnW"@Z9T>Em=w#pD  fHN8pD: Kj>.Aٻ߸þ/ݾ/}{vKc=vµUl <0Q(4529% ^R_{I}yIB>C#:%Us+ 8w`xG`MzRDuX!Q#X hb ,VuMLcɭe.odY"JbW/H*"TVV1!;Mx]:ʛjaAXg~,KZCn=m~ޣ_5??>DM6fwζyۅi~ulXpS fy{fbZ}s.dWL`J-W߯ngn|scۻTg RL`YcC70{(!@KR |>8px0fk˸fl_ܠbC!*šV @,`i-J OcʐC!z%25A{2L)tNuzX:S=L?}@z8Hi#sc ~6:m=ZFRPlOq}cbk[fA(Oc D݈1d/҈VϲO, 7mXy˚:s]T9(A% )p!c;vq%{F%笲q(5(A)\5Z)pT LK $4)TDA0\:zoA4C2pE3F%5R0ZWA-ԀC5ۉGI"\j=Gyp2z_RIN @S\>?gI?0-mG2@ONi`Aտxf(.Prt18Ӷ~9p႐ơў= s9T'?n .:Y<ҢP>-i;".*rcEFP4 {Cb"E ~K؝}CgsPw{EHzcqjS<59e\wLr鄢 A(&L[^4!ф)"z]UZ zKV%ޒ5>%iޚm3FA-9 .L4 
bf%-m{v_MNU6-]]žd w=]Np@9:HWR!K2=[lqUR}fZm%ʀdٞEO]|zf1[߭4ļřfgY{9+~İGCm6x"_hQ$jw RԈHC E+RvWWU׫1$g`sB.#R(423װT*f3fQَSD G7+ueZ?*x-6.7Vw8 cCvZӃOc>8*rs(˙(AlHd\"CЊOt`ϵ8:Tpw4gzP_3sm!$:d\2Q($1{uGCfyW&1P<>4{G4Ō=4Fy%$| O c22P=Zw%%zYRx(+8dxDoE˶%1, Wc!rQQ*~-Xb7/=VZ /uZf7],xsZ$TxnLd|UN/R-Sfvu;l&=]N}ju4;]dT>:#x!{b}x쪔ߌ)YvkfjW?CY!< &O Yg՟N\$s4E<eͳM9 !Biΐ ;2GMZbqH[e#+b"5ٿz>>CeSAqJtl.f I2iV`v[}wf].&x)9,骗;}ERI}_wsqA,l9 RAC9I|rmc&% ^]m:Q#m:Μ+0њ6:󆞇1RÆ;Qm0"jYڭ a&I,9aRiꕥIF:m<G rp!`¤zuՒ;  7 c zaB&;"RxKGӌT9ZF{f81lCAzM %*£M0*uN= o>H-C"=_/M~EX>"<>ǻ/E-^}bwP|s|3Ѽ>Ow=кly>y/CWLg[=?͇Cx> 6=Zp1:H +ŧIQ>ޡwì9aB l܂  NUQʟJ] 3l;q> 'aw5g&3%%;MavHM:~pIn ɝA `,|Yo`ݙ5e!wKE1+_fx 8m욇-8r8 ܋{HOQ &w煛0e:#cʆӂ>W8מ\v胉7׵Qgxo3Bi-ib)9jv9&Du5T`\N]6_Թ6)؉mh4 cR>S^Yl &N4Y)HNeS0Xh3>HȘ3JE2P/2T揇+|:Sn,݇Jzu9.ע^ol-1 qrtn%9Î1+'buF(n񂲖-ʑC!g\P#^|X<5˂;Vj2YKfpʚ ۿT$؍,Az o-Vт $!HgWv&7Q7V@rvu[7;R%ԨyQ3_BcJ&@`RSMm w*-V\؄(#k MfWa&)ݨOU|Gg1QќqrE-Q8"G+ɽ1֨,Ps,AA+ $Q+rXSfg`'? Pb5A#vu8 v4×1 (vC[ՁYKaݘmpʻz:"$0fX{戵.䎪)j¢2AdiC.tNTςcq AV@vy a*83,[jY Hؤ&k 0@ff:_oO& 徘Rv_ #\N$Z[Noh_[AXem)nP2=26XQԵAJv_fSq xv#&/kpYSoFCLk$i2L`Br<%n6s͏_W2ƴv8ʂx+ INfNq.Kcp/hwl I* v8b@II)!`cR!݃lAa% %NJ. 
\Gq# ud<Mά'Q۠kd_⽜.T_jcH5&Cڇ$(:+MҕJt,yGd"9s4 JYU`ܑ@=˽LVnv| *$έs^TyYx[%ՙiuB'irom2=jm9ELX1Ǜލh:8{gIKjQPsE ƣ64!<:*I\Ht]Gy)d̃iVIp".`yI;,3ĦC󨉔m$H|1 f3PD4W˞wj 5-9ww[pbLK;ʏoGRj_]Ҏo[K 'vGBԦ)tU(DPIW:kR$Z&hhDT2oG>iY\Z#>-H*F4]WrV2F&-H]m\ [uDf&Qhx 4 d41v&ڜiFz KJ"}0\1Cġ  "!:j@yg `,gŭj1`j WaRE-㞩Ȕ3+5s( RL8#A1,ȼ \h.[*;峌 Btpd[ Ny3JZ7V.v}ϒ6]=蠯„nT`ȈP_/@ ]DYZP*E'f&!wNb85!ܲfp+F=>WҢ^dFi]qVOVu73sv?\֫~ G$^w+SG^|v`*|3xl}8O'3S;:MM> w =ե7=ַJN:O=/g} Uo "|}9zSjVLj> Ih0?=WתlB9]7ٯs|?S<}8 @`f8M+:{zeg̙< l5蠠!?{WƑY7l1?gHrIH9^IKf̝~ -Vl4wxTF"G7v*d4]?wx'p/[W̓M.qv^9NVhx8}bt MQ._m{}'׎ϗ>ѴyH"$P _fY>Y>ct_Mnje OCu~wx Nw/c;EhG9;5d^YR}mgaS1ޞe ZݓA;E>w.)Rf"àW% }%lKNMv>nf}h8:>-ORDrֿɸ8; o~Ȇ4QA)]MF{ c~8߼f {~5UHyi͜?h;OwE5ey7~Ξ?첦<{6joe?uyzpa~^3>Mٯ$.ع'i4TJ6m6H ʢis=|f-pj*L9vYþy,\3pVN]ͬ\IcskR|}EcΡ}{IvN(wYVRzJ_z9dWPҗsԚib2P1pEe7Xc-.pǍIǍz߀G^[H/'.kzqqo?OOMg?adv!nu۝鮽ܹZW8zWft$y{wPا֗^V[Vqݤf֩Mn[W>GAT\w{"!i%ȯޗ/.R;w <^~.Iw=`6ʒtkq0eqAoNejow?M>7OaJύn}}/A)^\_/卅`؃]܆1S߾j;w0mt˖B1sp]T9G9[ot5!zUld;oJ Ŷi~yw>5ON^a>WL\Tԍ]tOzj*΋a8+Nۤ&Q|\9*Ogo@i.\lagˍ! V4WԮSʚRDOcisF"Q7W!tUq3Q)9e(Ǹ\I8cJ+{X`@2(  >bTT* e-kQ(J>A|-"DQ,PXTgkhau R(GU9Wȿf"|1%[ InG H騩DȂ644dC*UK4!?FF4FV9e)ZF,A$DF*(Ĝ=AfHS""-18HchN  :Ed 6 V)ھ`ꨜG`JURZ*B2 c=B#弓FEIeHi.΂`\2#.0a0j`#x*QZ1,:D 4*#4 1fiB|V0&5[Q Oyl:jl >t9f&iTm1#AED-7wnQYVqG9 <6OS $akڙTzI'B"pM̘&1>E 1sH4 ́p{*$,zRA׹ 9 ^!Иjպ'Z`A#0WTxɃNPc$qc#ٍ;5j_]N0ݴ4zi9- f="VRF:D0@adP9pt(,B 54AP7$DahE*0py"kUjhx-n^G\xPPr&\H&0d\,drW`i"dAAH8r~1!UjhxQ^FO<с&KEҖJ /hxV׈T5&'i hxįRCkVuFcsj+L( LFHpo>$Vıc%,7 Z[_FoP:8sh@eY/ 53v9ݼ &gs-5h+]W9g@ $J Mϐ& %B@ r{sp7ˉ_Fi=@سcrFU! e"-/#~ #4G,L:LP`ZPՊ |Czjhxps61F fs/מ?|;uB'G!J:4g4W"G,ʛs.XHj XߠH 8(i4V) 8[*FE.U&1mJ'rxLWTzBL" )!5U & Xo-14:̸! .JnVcn:NtQc~t jI \>j}" r9e .91I}0!t*/ZXaFAp9%J'(eb f i"-6'8KsI nh$Rx6<+Ls )yfC% չ)S7әƥF:@\GCC`]tGx¹gJ"Z\F$L'_SU{{F.9J}p>%AEa@ &l'"FH~%8V$2Xzy$t p/os/_I`)rJoopĸS(9L❍2:e5(|mGhT,;ZKٝ<ҧa]D& QȨyD{njt#iE iM+\HPLUYJ Br,SLv{f]Sb/wV)\ShM9x࣫6kT=}3yA/gbm|͐vMC^ ֕0uF7sgr;J)lnʜ*7kJ?lc ֯V9<ռf3 q /nF}fCAPaZۺ_'6 2qm6v٩lΦTxRDY5}H=(%! 
ӍӸuH\xOoUtҩ{F7`{һ?,[0ԓg/3G9 6w?<yWë/}?ޝL|;{;R`wff+s_Gh(+7,G쁾-$p7{`d|>jy<|?ca7g_H{?nJ ے WaDf޿͏QW?kpRY(r1R҄=SLj"$E=m,62=5ۏEg{eL!Ζz{ۑ r򬿘E 3$f{b*ױ|4!:: #/i St:z#_O0"afnLoqge~J6;EA [rpXiƽ7 : GCGer ")uH?bCt{!>e6kc-S#2g 9Dې2s)־d1~8s)yg·8x]S|.? sA'iΞ>zÃS~HLRZ{{+J~s*XLXVM7q-HZ@YO(o(9tv%V` [d_WJHn9_"xARV@>0\&d]2+J-Z$t[^yj;oqvogNѶ`rUk,_Y6.[WJI+j]&rjOʔ+U77M yfћjV {V X?i#~fKm+y7XoplŔu6h jaow!P=S,N7ayx;9z0: j(rfM~ʓjـࡶu iErHl/u ډG%,1h4K"|4Q!q$dćγs-FrX%MT%׶WtZd.$bGyP%ʤEr)zDaxOQ6@Ȧb*CO{4SMz{jbg-Kd1!Ld $Z~- "Δ*2(o} ,GTPPɊvR}eR U! &u1+h<Q:'C}Ud݌8dVX+џGem:t;`7YK0gOOќVsөO;&ܫ6UE!hTߘ DYs`g̈́Լ(}T^:õ7I~N+q*z02pz{Ǭ 9{(0Š09(0aAl1e-E+d9  scZ`y`B D#`EHfnt9Px&#s鵵Ari"RD3٤t021.2!5ybٖ)p Xd+: <\OeYp l`݅Sp&dM7)MO=B" <SQtn_LQkhjK^؛ [HfYܗX.pK3x,70 L :ý*uݘGD֢YĚIH&!F¨aj8պ3w[)dfE].Q&KPR(ăMǃ0 / X~ܘ9p)V(-La,TT 2wD0ȴf]A-Wl4E@dt2~s&%ż6ZC_jJR[aYz3-KAHUTd,F#n33xY a}q6h+i!qЄDbdeI!+ ysP"Aj YWֿ>X$t X0TAW7i9ߞQܬf8{ޣˡ{>zdV|?j/݄ۅVk$چU 栵 -'oZmZ0 P U _)ZQsZ*FܭB9oCK.njҷo5oenj7hCnhbhDv݈L7TW4bh.u lʬ)uN.:jƱE|۫{iñCM<7aM8;$v.5ӉgZPKX'a}x*?56"qΊʖdP&J٨[O"|ڶW"Y/XlM8R$TpZ$x[K"b-Wq O* ' ^$wrwI y4>Hvx2q~hF{~#??F{^OZ2r=v?~|*qI&ey[eteQ5PhIz1-n[xA>ô I0~.QZf<82EGN qedD̪ fmz`.Z3׊p%;1otk@?܆i4X#b\4pC EcATF6^~+U^2Re"e^%mfNN- ,OfdE4xAi`/tK1+$&'7~=аXU竃!zz[%%8/)ƥT~O>~ªJ8S\ &1=uCAF84 逈[!&z1GF O5Y'#̈FMm#j~NbWX*g2~`烈vCeb /X iE0ϷブhB?{Fnʄ٬x?.Z[N\-$g/v𔘥H.ZQ_coYjgʜth}3ҡ_ӈC &ߎ,>i:N)Kb>Pg( 9N6׊\S}L;Z߫pkq°vtU,[Cɪ%I$`n523,=1Jpl>!DSla@K(ն0RH&+e-mebѢ4N 5"S ;52}ՁFC 5`B\bnALcCr16rR3ƕ G<1Z|w.6B>إ* n |v8_֯{7g.3^axyt%4\O|MoaznOkqamLa A).+M9sNxyzxTxPʂ!._F=߼ ޞ/gN}qrz=ճ诏J~1zmUo ųק/8|}WbUe;s_TC$;z}uO?1;=jX^Vm5/bc=.@ゟ{_OuD}t$T[j:0}sK+ZY+{Rj=MwS 桌r]V912e %-e>;>b5W6ULVN2/7YVU7e/X'+i`?Y~j`Jا7OnZ"b~i|w0<ͿK[hΛqM EHV* qT f`WdŻ}T{G5on^e7X qQQ7o1?s(*F f0. ۰?:WJyma 4.ὴU? 
/gNFh` ^]]ݟkr*5JWmsUDn"R|Uשr3Qt3-aAK.>/zyr^IHe4Zgh&2-)E(B1Pv,dJvV횫|`_i&O*vZ#bºYZet:#+z.A^V| ஊڶ&]?޵J]`,UM`ք޽ G9Sߵnz/lW:"{_[Pq2%|Aou7&(4V1!Ď95PwFuKgt\rr& 5 sZ] sʢ#1 Q)GFEfe{vkE jE~.}T)ŵPQ `ⷳmv!`1Vt {JKe"d]2x#(]bRuj-5vMS;Sֽk M-aM46C;,\\P `ns\& 4n1-!hmJ[R~_q[(u~y2V] CaUF!)h<6%ѧJ Ҝc -U #IœlBnK tܭ4V Flp+YivtxNQ Hyy7EN13)UXP6M}i%HHڥcwtpx[9]#q \9)4rj]{4avQVɬ^ukN7 R`<43rGS&rֺwgyhFcG9G諣ȐSia 0]T%q* \%ndvٖFN( _vF@RAXDDh>g)Ytt=}]>T EqE7HPUF?g\0z;Fٗzo )):BY),\ݡMI 6| .]jbײITrIa8p"uKO@&F͕>:Sna%/}w3y3ƣN7Np_ey8uӚhV^uҳTqL\yoLhF*)p0O  ( !A_tE6PRaŭjS-a\T}!Z<ظb{4 jm 2%tPc?&GQV-Edv\6Rmۿ#g"~lgZ}&^aOMܰE",XA݈8p>t% m` 1&Cݪ 6bhT^Vm5āXʷxvĺ1Q zU߁N@,5q vāXDs88Ɓ} Dgkp <Ŗ[Sl9_-qS nLq]".d@8/WVYš^nmH5 0߀Y a!Zv5)kjťBs# nҧłp+g*L)y}r=PA=H̥8?!6PDmP(i0z'sT)T8zǃW>u p5szRηZ!<!=dV^?,1+Xڰ/-^ыo._Ȓ4Dt<>g7lIs\66M 9< .S.kx c^F tl_:d+.%ь.xy ²`@LLizgOmDH("b[U5YVFyU&1w~LկwO jc?ݺnC0f 6;#fg%opv#ٺђh9x֣#jMcv`??_qiЮ$j?z"R&4ݺ8 wOcEWtMQHcF8| 'P*t{,GG&)O7 fE$=Wh-L:dzT|EE*p*,q5͞Y+_|eN_u/ G)A(%٣zz掶bͿ!׀?d/iTG)(Q4z˘z!4ģxi nq s GZp7BDzD,lliB0OyNJRe-NBQ=GŘ(Fz,(jAG Y4G[")¹E'M]7~םu}-Z/ﺜVw]Vܻn13웢<,Wx^q`-)VUD_)qef dqLN/5L^6.\rzC\i]lrOC#p0DrnS>%><&ljj@)j<=_\[_(@W4"} %\vQh~[oƈ9x{pz(pWp,~ľKRvP(:?&E_ C$OLЧ&E!,g!,q z1&h)Ʋ\1>i⎃:2-SdUT3R}j$Hqm3ؿڞg7nM ;ShyTT9-NJ)zYmYvf.\ -If9}sEk3,-[Wx{' n8G֜^!GfuTuu1FdhWf"Ow͊_[㗿5D*ƢM:; 9_t\҅;\!Fhg_ǔܞbrfֿݰ8x5mJ^S/U)/U)/U)]rKRRRr)ukSnt -Sr 7]}~K*k.u*/u*שVoK7SraOT,\ӷKKKʗjyzJ)vDV!2]1 QR""cUn¡TJQ.܋5b}C\ԗ5]kŚ6hF%iݱroޚ;i?Ś{5wŚ{.Oؐ/b}S\t斶Xs/܋5nEU@g_ AqW9S1om!"o(kEJοx%܊Kz>%|5<~x9ʗhts= tusP J8o촐߯a~\~ 5a)>ksbGI*zx"4 )oSGq_8Om]3}+VRűvq ߱pg\دV.n!VgudP=z^ tڞS۱RvϴP;POi[k|oڊjgZigL8.δd]3t}4=aV4RRʨh*fA1ȘL_Fx6W+F^忄-6MM)nVW8v]sfmqyBVՠkbҘҕ͆0k0cEH+d̋Xt  <$3=oYwIB!J)dg\^GP AQ=7RI0py1JE2P/2TV)7ZɼM|Uv-vʂOڲҒn6\QV,*񞴤38ÅCLtnĪY'f9>!Qܾ8CSHUĚZ&.؎Klw38BJbڽb"W9)!|h%%4ޭ>]3ٕ> 廧Uc(*-Ulr=e}xi/Erm M$O .YYzZ:ϗȥ3e҇p|-yhn.`Lcndz; 0կ jx?|''}GGs73)UXpD` kS}^S"-D A;ϭ"<:~?<ͦ u5~?࿹pc<H!QR Jq|!0HkU)RJϭ ||}Wa@]+Rn$AE@+b0QU)5R[?DŽ3-'kLpaid!d]A_U9jH ۠M/ 9 yiXPiTðrK,@TXHtPC:y[ize<mǻz&ֲG=y4ݲZCƖR\'>>O:f c qP)zލl# }eԀF伃i 
[7<Zw]iwkjŝZQku35Rkݢ8CZ`V.tKfv<Ƣ9㭨87c1ru9g4mE Ԃ{Qk*-ZiB`]*DG\#-w4-Hw̡[8e ݺMW8VxHQ?o]7橔+uu'p$J WA(oU{E}2@8W>[1cb ѳFw kWa@J! ãM;PZ Kvl1\Il9$uPF#v4-Cik! vlX#_5,NW3(N:֫T/һ$V%J` bùK"$KQׯ&|}sOT/VP_d<xǟ#eyU WhX%pE[zIHyJ8F,M,# :h*5 B#a!NQ w\8oH 4Thc$LY ,$yЧE*Fo8ZQݝ<Y!=A gEL%R))!KA-1Д;Ki&'&bp V1B=m3)vDV!2]1J GI5`2˥"URZavGNF,^5#pt@vT-5)4 Ô8dZig ZFRNN|ۊ~F1-;ǃtPl{R!qfm$ k ZIcόq #5&~]8];Jd*:!NT krG؁*$/W.uHuĉ@EK<T@yG IK ;PG1vbCm4Iy\:t\[m@HE0ٞB޳pāQNRSj-(Cmٚ+Έnm8FJ{h#HϵI i̼N d-(0 ;υ̷HYȇjfjdY^q˸vgƦZ, ˍmnnIE۫^OGa>~lf8J@BSI>F,%p $<2# '-GQF!`_.D=9*sQ8A-!F^6 1i9.I-=aO?( 1 rO1!2$O]d5}ȼF̒\݃0igi]s$G;tՒ_㻻TymeWg6WS 3ޡElLu~HPۘU? Q"( &Lrݫ۵SsW޼t\ݿ\$)LXԞ)X""pΌ =AR3ݙ^/|](jߖ)^v" b]؄z2a#&&Pl%uJpMUHdgpOC" 0q[N1rl<7@rı(XBQRqAN^"X- 'a#jJ" m`R;Ki`CgsEŨ|3:Ԣ+)cisku}r08,\Z݄@7D*5$Syye!a&}VutzG'ZVDŽRJ~#ܱC=َGB;*)CU9XsXyoS6r^H!yH1Rj~ˣBm8t@x~/qutxQbQe#,RT럲%Anĺ.Br?ܤl6$qp*5il`.`NPUs6$[(_(6@pgCrm]/+y] kdYfw ҷP?̧n@,e2Y3 4Ip`F/S6 vabVub?}]GGqz;)clͣE.yOkR!`>'ŅSJDiQ: qEe LTFcQ)#*wsWڛȹX{-!DdlΞEM&P{AALbYZ2#I+ԻHZb k ZC\%e,{{ {g&k_@L8Ak b| ˄ojtocYpV] LLg =f7{FKji>S.ՈuQyl_jNkFB'Pt?\n0avFHJM#Y6ożN؄}V;YjCz˰C7j>,kg+5go"~B[z&"'zMka}5NUyw?c= VJ:oüAقpȿ9v)Դo*JtPP }<&ƛ 1յIKtP`1!7kF9c'PNڋ_TD>u:8}Vvv%'hU1gis|^Qe!f>R`pIH'JǴ^ {ss=BD!a|k֭ /Ow0_ʺUF7;ٸ6qfxx߽]_꣚67/p[cjtz6߮ X?tT;q*<1TMODTplx0W?#7_n_>`O\ra_^y}t"ZVǦgOI)+w̥ϧ lڌv8[Scx~m=Jјiss$0|}12TTZn2!Ԅj}|1BPwzRrD4v){<;7Ch}ڬ} uMDgFg'y˚($mqUcm@,"iqX`)V:"ztE1U3y#ܽއ竄WgU٦QG A򂶄jxAdnW L6{>.Ku-gPDjrU!"΃zÏ;;86flt頋uoctm29\:%o6r y2lHR,7x4e%zTǩd2b{Er9lqq^CybF1exOP\Y_ҭ^q˸v{Ʀlz7PlO: cbJ(+u+<:OtA=.+) y0^:EaQ!.r+"uhD3P+$/7lFfjU}h0ۜ=+.A1.-޳/\(?/SkהHwI0|~9GxYsft\b™Ö*-< ‹gdl-_,A ϖ^.KߪQ[PxiiciTVXYW1Ag`VqeYyq۬PGL߮1۬Px%+ |ǀoB 1E0B 0m@A,AQY8mV(:u~,2ݲ.KxXp W3W,J`Til۲4`cM@ Q =2 \ւfAc &&_@P:u!:+$nuӹ~r6z_ο'Wͫ<ed JF/{~;"`AS!Q{FX+EךX4 ::#f(\Y|Y##CLB2b*Y2:  "- `?+ĈEJQcA+84`I"='K*,Cn ۂN2L.2^0 )dZZXǻ  `d0RsDI/58iVTYҕyL/Yۈ X6gr s1B)Fdi٭a!+ cOL{ML 爹mn[䩑^2qH٬+%8wQ,tK.r>Çd=߅ּ]~|k>28XͲM oA}8i`Ys9ҺkMԛI)ZC MͭIOAS3{_N1Em"}#fKn@wjv:C?y677te^Qih)"=͔L%QQ"223PP5#2\e E:MMׁbyDQ**d@*(ncHI. 
us; ™ſMSjNCWs]م@+!`AWD:Xq||R3N]@U%BF>N]W\)BT:Zé&4*G:IGd0݅ձe=*1jT'~|@5fG3V%>~fr>Gz&G3>l(G3|&:ʦ>}w#D z2(1ߨn^Ilm|WwBqeSƜ|лYuʠt~ƻ-Ej66лMa!߸){0Ӕ>VYC\?EKQiOy)QўVSKJdL¼Z>(i2« ֍9!ߤ"ݻ]n27*{'3#$ Nh(kZ8 !^rh 89~bAᒌ,[/o \iZml蓍-1/B1$Frj?E /©-hFV$A;Z\$Br&`VDb^a J\`XC5[VC_lbu3d#0W^v@J)^h#%F vj[H]%A B6ѲʁtI:x<\:Hth%]ELګLVd+ĨH~|`5X5Ȇm9>b=:baARI'iV'6?-;C`dz.|Z)V RIBR@Woc-}/JEL 4#ciWǃj#\0 dgy,p%gmC;ؖz@<>ic@]8zo~ 7jJYjX,E-C)J$ OFs79eOx;U DtO+Lx!W ПC4vc `"Ј d'],t 1ڦhPu6EIA@DY{o%_X3:bh-qk6%䞫O]}s.zr۳9OD>YsP }ҝ}z)1No߼iV"ְ}sӾswqv?m?#/}륯W9\SlOF?N7]w;ky[L3vAnٽ 뮾^\ߌ}Jnu) *E~.<_ +,4/ۗsD:/آ2i8IƤR1V)L3ƒZ EhvB ٗTo2-^UB _-J]A/w(WP(ԣ';q~!O朡L~{9Mbwnxw:Mmk^l9:"''D^pƱ΢х.ԅ^JM Y)?-Wb"mɗۧӎxDQ8(c`)@mqʢ7>6~h()T%+Kb Hq'yF~ԗ[OP.&;AKBep765a `@E8J]RDh&D.$x] ],f!6w]l8pl)/.yþTQ!h";p&VǨ3BfdEΧ7XtHڠQrc@)xޢZr}LtՎKj`S^U{oHX%&[%\pG0yXbFFhB::“I5۹WCDD]]l<3CV4Jv)pMRD́oSRc]YsG+P݇"0abd_e)q\FvoV7H5AFh@O&*+ݼѻ;.H;AU.O4tY*2: 5q YEq 'NK#%VGq⤩қ7fx4SUQiр`6,2 NA:F[" -EP!BJ,6  ɫR>ZC^݇ S^ui8/*@]-*P'u(L0t㭧9"),Xʟ\cXeZE_sڦ۴B6iL-"=7%DAXElM@3`/\'L1̸h{4{5Fw˸ղkO"Di Kwi-xl6r%c6 ;7IF.HDW<YYu 8$ c:HW@d+XR/;K?9i  -BІH'q⢴?^̗bLȢ츲2;v18M8m'Bio䍦Rǀ¹$RuN'O)Sy'o~_^}/A 665\:LpN޼N*U$wS'~CW|@+=݇.2x|VDJEpPX#,->L{*WJ%8"峷+kf%}?RAܾNCܕFZ~:ZsޭMI}Cp1&pg&༁ ''1h156֢L@DN\wJb#C`W"x0F)kˍ! 
#DظvF`- B Qo0"Z2XdHo cI^y(?C*Ap!Z|!&?$^oʪ/S\TK&־33巫\%<|6tP&˷'^Y(+)*w8>R$d&Fy+#b޵|viʆ:y*jw'V`IN[%pp!1_UX-pp</ +;Rˈ ņ1T%v5}*OS-WJYXXfzRU$/oR1u\߇b>aLE'mrzWPhA)@}KIp+O,KAcK@c3$E"ԡ9`XUoUeUl[~oVW{ -S@V"<D+cTH6}T{#X#"x Uשpັ, MV^;JSًSash*lK}gg H^3`vj`|b5!HZiwߧH؉Vḇ)esDx N{<D"a&?)~~Kr>*#~=T' 1X˛yr4صXc}ި·|m}̅˦ DDǾ̝o;˿?y}mgYe;vfTsgEH[iB $#rĽ:2D|"p@,Z{+7XgmE2 pS}Lç ghȋ d9#D9?&lvǍJ+D\eN/ĕ ŵsgvz' >+WϏomD]^ŘܨT8Vd&V sT[到?޿;\wU+, (-}or 7._fweW2ynULù2]s4E:zYFaQKoc{ۘl6OwՆjM$sNĭP){6=6uA@NZ=wf|m6=F5T\Z'Ux.sL}|S+\ 3 gZi!xdI WXS"4r%ESY[jI$:[y5sSjU*L2oyC@lN PXbT.#f{)3Q2K,'Ў>.|Uc^\;Xs/!v02fZ[g E{ , â.G,6JBqet,'Vsy/G/Z4a p+fC!Zi-Ų޻uf3q4Xgw"⚨a)ia8FF FFf#7J6yobDۧ-D m("zX`٢Hb[DkQ0";aH {T$^ ֮AOkPW/=%/zӓOWC ,ೄN˒2A>ZMnv8|q#>2p=bX HS(&R05sޗ9nD|X30PpV$՚B θBLpEE-O'Μ<+4iPY'ES jHYLض 'AslƇPNOٝ3z[cWΜ4f`YocNc+;۶g|lБ)J/K9ϒZ8]q&W"NR6|0R"6 X '3Ԓ}Of/>ƈ{BXQl:OQBK"ɀz_b5<{}%Btq!ݴy=0\>0>YۀW|d!XSimŹUJA&3%7?/G_g],?}a9mv]t:m5 ^E:}je7-|a_mYI=%C % $ex(g(sU ͈E;ge>{;fVwX[# ޶qF|PA(E{. ՚`#v 8UQiǼ^VhuKXc☱Sv)xTOAqf)޳uSgudPx7.B9X#5`H(:μN 0@2=tأ@Vh,Ņ+v}3芏6|5F=^V>UiJF5 IלjT<#Eq.N4{AT>g qH ;/@W X$/뛙u#V|\O&_|! ,݊XqvbĮ|[jgdST6b*@-DR$dކzMGkT ]?ݧՠ`.ەB2+<M>{7 AO~{~:y9Òv|b쑣,&dEF*ѝTwG0cY Tֆ9ꚵ#\: ԰&%b(>4kTRȇT(H X(3s-oPD9?Xp#@J/ۃ-}xaa`6IMdOpQJI~Al4'8p4TLc)u aGYk{M5k+knHϸ[|8G+ey^e)aL\߬H6q 4TШ2+ʬmXU\~RWtׇHebGiҽ8ZִLOGvq9}4<3:# d a(c(+BPL;"%w%bp2\Am6~4v؆lxִiLSY4%ÍNEkrD=h/^B6¯ެycuFUSmuCoώAϮAohv3|?mKYі"ֆ1X{oȋ?3mJQf?7潈N!8jKGwj8~( h혉Suf ߓCT񥭵F=!qYq*؋#%ݣԹw=pC<.; 2:iMNF5)1 qZIϤOD#m!x)>,52Rmnإq 5mدBaFT:{mڂwH4&Ho|p*ARJynrŧfIQVZRd5 șD7=ÏZ(د/rBRbPɤr}Y%_FXNrJϢn(g`IK¤-l9Uk%oyڞsI Tx.Gr*8RiJ x`սoz=m">I"M?O۷SLFiZ' <5u77'ǫܵ~?)kOڱOV$HR$e6D֧I CQu#*6 vZȭՋZȭՋjk:5mF1%o 1:ל}D\( VsO8uD8Ϩy#"DKZQJ'9n  TzIb&bGQ+n Gv?AڿfISr<؏EiJOۿ(14Jw~ci_פSҠe{˾/ GP?)ݞދ"{z/{z#a2*]Ŝ49fQRpU$9@`ׂ%`y_s> s'O7zAߎ:!FN,f=;OrO~h|E[lY_tQR9h{e\("G9ʸFUM& d0Cl6 Fc9TȓK-MoHt|mLqةѼ/T0ZPéNifu:59tMt+mboi2ͽf:uBZn^1Xp ɈiBp>p>\dlx,3 =HvpBj,:K4I))_t`8E

d@/*R>:ލ˩ G)"-t(69|=1ig*jCD#q*_Qe"P)eE( I&fd\ђן- :ԟHypI& X^Y"EI b,,& @ɨZkb6BJo\\q/!B1]#A|7$9՝p;ֈ߽ Ls8{yW F! cOw9YV<΢78ӛn'n//{"C\ 9v4g}zj<%ln< w;S\ p`3U rn<֣R vw»pPss]_И4MA# zC dK IET˂zQY̅12+pjJHa`jV4@ǐaH&0t`eAwѠ6)cre˂6Xa8mA%5Mp^ِrs"1h6 rF5ϩ$Hq~#Cr9(d @$rcieP,gw.kiT ^kryk ])ur}Tfbκǜө4i>ePQ\:Td$ZxÙ$jP\!+MfO@-*B#, >!]8(bFk fmY,h$U!+^ FI Z$1JEEElA3i'Ʉ`d&hEJ F/ Zmhe+)qXLxv)/ω# 5JA: Cpx^))k [$VZj9b U-pY`O:ae>bBJIbRkT4r^a.8o=TD51T7$D][֡!L ( QGorU = U= 4(j9OjaiO;e%6!WP%wάTR~Α;=pebb dKr:YTKBb-:GIhR 6 6$)2Eb9tI hؿ}\b ]* UB70-\\|6p[}}sʣzn~ɐIkM>gftiMyWD-WRЀH%8`ĀSNmذn%-3]8BATPၒ\_sGh>UeflGXX7̒5L&S?7̓QՅ֙15GZYrfƙ-u8q>W{rj 2wf('[tk]#NI>.kN:)ƻNt{}畭u4krC2/EYos F0Fl3daB\iV$mJ0mB,6J$& dX2Zh@ƙ&6Bb9Bo9yT%ܔ=oϿ;p6g2Aed6K<*+)hT՝m[0O:#T#׀ȼHxܠ(`(8հQMA"PpLaWBk@XgXZ '|8ه@m%;TyL闄~V]Ma*$<7*##rJD"(fJِi?7!F Tdq ?mљMA tCn}2Q I K!Ft8"䖆k-gA㣋CfzDP,zfoLS[;EY_T I! @ L0C6wިx`kS ∩ۨ6jz hr=~e?g * `+y]B I1>,9m pKPn -m?X϶$GO#{?E;|{`qPۂVETݎpr7OXXOMwWη,u~<\םhlQ% mstwokw7~OϺa>)^:|\MӏΥ}w$d9hf.:fF7 s_Y!c6<>.`ko49 o@@r+P}>K&<:oq8*ig$WO"t0k~n\Ϻ~unr_dQ{e'|c:`~ZFov)/w)ewuJqEF肕T&`z{]F ?)=(O4.f̻_ \6G"q9/=5ޫ')eMחR.3N7_Cm5*@ Cya}()t6{|uUh NW;Zly]58 &Y:Y -DP@$S"r)`$#7ZRAd *f->f>jA\֨168$FM脤KW0frF#8ڄjS9u_+P;ĸbh[߿yWd܌q͋1dVo.ۛSw酝>κ>˾{5\W{/Jr7u#/nfZ#DQDۋJSr\Ў\ ;ǫ}_\|ΰ[LZkP(s 0ŀId 9Ds^S䐭 n6vj6N*j}CXjԋ5IbxB( fO'Z:ς=_wsy '@5 nlD*f7^Ch≒m 7IsQ'^ut#f誛tC .<DF9o6!&OOb7}tyM>d(Q+0^'F.:o!ZOJCٲ8bRT6J̵_DL CWFӷӷl$gw6**OCH3fuK(R_0,zﶧd}= ;"Ub5nMt>zM]e%z^89~g 5hS8UquPi.aziRq\f= 0 x39*5s-[NjjX]x,2<~}\'9C3DLf"g0Zbw@͚0֌K rL)V>Nsz| Ҳy{B^'!r-6>=;bNQ{P:)2(ܼ@׻)|/pFY!ۇC_!Gs6X B-q^~z'{qaiM\CKcj`gu:N|;"-+B|z l^zB*2WNE,U*;%O_:o!\?xQ:PQ˩>ZҪv 8\|ې-H=69˧$KnGpumFw נx dFSQohԽLgEOQGv[r.$b5wU'}΢n:,oQkĀ4ZŐ5{T>樯 }j3)'J}؂aUw:/kңb|"a4)=@J)M#_Xx;M F^N߾?`Mm@JR5ޡi *C/}J>30[ OP֣lk9MXޤ"%:Ɩ}_c}_c9NfGq/5Mu_bWpj)YTjA֓S{ji4-A]Èڰ5fzq}(B}i}ž{٣N$53 9ɄFHߴ9N^}RC{{o/ǭWF^wNI&^z ^k$;ghmtSY>g1h#k"u\YBNv>OtS@ 7td]9MQ(w k x_- F7TBcgl uWa!E%?>g wQQ׳+s#uԫԨGWFWc~ӿB9zWY캶^9CIz08'Z')!96-P(,"9%;YVVֿ/qB]]ul6`q30xvONBA.ƾYQwv6Ӛ`]=9hgԿnJ}YB2HuξFo6?'ݯ[U4fOW.O->[ žyhNhK] png'[ 
QCu.Sފ|f#>y6<3f4)śȯ|ۗ!BgͤBnX%c{ZhqH\ @L)XI9[wQ(XFٹ4^J\!6(бaȮڄawUg-`M~VƋ7VtƗߖ%Sjҳ_5&Ij]w-ȭkq4H7M7c\ww4ϮhƔ mX{luʅ )Db=7 lGZ M"TV_wo9>>rBYc:JpshhZLxL[*Q(|5.B#k=*'YTد@)hnHQ\t2Y/v$e-69do~/_5(`z2Ką q/~K ':t,5pﮏE#V7ŔZIsq2><4K_Μ;:,MhIX҉fsT{Rl%Y9I-\J5_T͗B5ħ/Oih7.^Q2F^;/^*xFNad]zPB0M͝_ Ki=6ʨd#' N!k"u"F9)/n<J@ CsJF㨝z0YۨmR Qa`JZ5׆Q0 Ew&va/VèQ|53pPJT}MThIbxFU[C_bƝU_&l_H_xa0Uy1z1t忾+6*O55w5-~yV#:\qc^_SL5=FI$׽uކNMsMWj l|lITy{ɬ;b"i]prlP;|;z0L{ kށGk8sv>̙T >ZC&\×CZ` K*Rd@t& IJ}좓VֈiɈF94YB?{WG`RXxbw^O7(Ge:+Plc.E{ Moű9PY0ݙ5ρb{(jw˿ٙ\&imBpt;xY. =x)`c`zڇp".hELcFzB`"U<:4uMT]1Z*esP5 6ҬX l3ً CU,k=UX&whZ֘*~(=I+ne>:R8GWfBJf|g/7T?OQz|οw}Ӽ5r]Vff~\q~ aY݇p_<˰X7;_ƝT?L]5O=Vv jjUqX뙘Ye+jn?wirm?c'99>wM|)C- ? nj{BX} =xit 6Gd: 苈v&%W@iDr겗s)QALq%DAVyQY+4 ENTSkjMCV2C(x,V/,VZx#J6#s/4ԇZR!yck " 1Cgcq_S="Hj8GUf0^[К-4LFGUtx؎RLrYow׵]\1aVeqy;]T.)Lc4NLy)Vp~c؜nS_k4ia'X'Jf`߯I5)XHBM XhûSm<76r }͆zztC/6R XlL{|ft5y)" g}p?.-0!%ôL؊tL]Z`-D]: T!Cer8㝕3ZѠ,mVut8/jc X|G](n{S;1ڱOB?<!= i/6%Mdk9"=YBQ Hterj G ,|kߦZIBLUƝއ 2"R9c _Ku56s^5M8ub,kM"MSo ׻L5Se~ʲG-pxy+=oږ82 R7a(BP l*]05)$:% GtL-$EI&ùV2vAF 9?utb4z*o.hQ=DBldobDGt +DiHyFDD5.}뮒]%J^w& `D8 AMC{W\C|2 ^[@b8yF_P'wa EI^9' _޶Nf2oKdm $`InkT2Hàuo;֋94KentVpA` !/P"F^;ڰzy 9Xj{| HPB _\Z}zphk*Uwgn|JVJO\ɚ`[w_KYtUVOAe5Lù\+1z*(HthהnÍ_3ޓ6wxQ#]!Er$Ubž}wG컣xvȀo#IKKwznnw|/C!85*$$xAV{Oc5a# gJZkEͥ4h%H,Ѵ Iڒ u^l/]6` g.`K\*@vOlG^T_ul4:W@UK51a 1 MA0f!ju]3m,F DVPqM1%?$"133/'ueѼi;E*Ќʉ 6df"=K_;}; ɱpw-^s&F_h}މ]t GT99)AL%)V0o[mLXTml)R^"^IzxL%s:%˵db}Y )md7WOnb^`Q.ab ψgkiJv>"jZ +t+)GAi[.įeaRm A(ڔjAi%Ic\b! Դ8L)!iByAҥTz<޶da) 82ci7Xy7Ɋ91Tq֞*4lmPTIiMH JC SPMzٗzN$P:ߣ5>gp6r:O+VҧSOu?i2_D}Z1}uq_~ !V!W2޴, ƞ4~竗Z;~/jRxwA@m]*.@pZily\[ͭ}m|EȜF{)2 py_Hmzν9'/\ [Mh6`eUnu%? {%W+EJב'lՔ٫'d N0$cN`<ˬsAO*Sf2$J'd )J$e3'bARV.(RV駘D˱d {( 0c U MFʵh ?j<[G!⑊r|?abI|PVJ! 
gf@ymd1YG\ul@,WfvٜnTyoaRSsM=YNY hs/dyOSVF>4V[_ Q+o^]#je=Q׀z|KZQ1Jr{FI^==wJrEc6MHw Qf2@B%@É捅iDtm?sMƏ5V_Km'3,s xzt5n&# N/ AX:ƘK}Ќꌷ(* t&VJGzK'6hn_z눸v9~J.Vswd&L |yP }uvv4ڃ Cd0QH[y +k)S dkl$'ݭ<1#lisHg0;|.T{#.̳grif>*U=@y/bJ{݆œuF輍VZCO3st!cOϱAZVq;%F .wuH,߿5`W,RH%arEm6L2SBP@Z2?N8cuQJ(e|XcźX%{Y+Q3 v:z?T;GGCQsQ1o9J[>q-L˵)*HC\ŒԸtYd<'7|^ʋw'|ZnR8nHc~Fhak^CőU]7eɹXVF .+UW29$.Tk.9-Du9Ioݴ|_> 4{ǩ7K5ځ^$5"H`JQ>,9/3Ow,h4(,BB~JWlU~}!P #Hce i4yvKsH 㷤BK ǀFb;^!I}0EC | Fl_tt&94E74׌<1RcRc-6<`Ipt1&cNjjGNZq^WD}oJʡPJptj߶RS׫ٍg.Lʧ(imeeI51 qC5P:/M2Nk,hS}Vڲ,ήAay{/gmtj4#,qkipkԒrTO/T\O%X<YTzi(gg:YgxUpD4XqG.4(HF2wIvi<+~%e>@=B|E._H0FJh4x٥T_w6]:RJ7+f!^@ s$nܬ1F8E2OIFɔZZi2{+m~ԟ˕/;ȉiL3H<X<qB C0cPZ7&cEn(4XP 0Ah!WIFs۷:~?dK!ˢR[H'Yb;^/Iu`󚍹am+Kߣ=ܼ\LOMٯ_^Դ6 3^qϬָBӗ)_@ڸ̇8 5.{?͘Vln⋇[*.)F*~*GbB\GCvhxr0_j/^-T%4.KV4\@S k 5֍ 5&– .kH#k=)E3~)m]@R%՜5bH϶sI9QVi]ZQ @iAW[wT%t!NfNJxdQ#gAR*:5dem*W՞s׀h5+Ml8KhnEDߊٵJS'ypͣWpΡdiHxV~6~~@1]ǔRqW+ Eɔ=P{njmMB3fsg;̩>z6/>~M~e(?2BlwЭF U/prʰ>}1Ah`tHK\w ~{ռփtysS>:6 @/ , DF\ff;*F`{ nXCvP<2&\QD5.TK5yW3ZMg~{B]T3ѭ\[Սy+ۃ#~{nQi6u6x}rܸs6?}ǩ=%`Ą h-&G83;~"r0GLh5`=Rl ʡϭQoO,e3Tۚ^ P}ppҶxzt-n&#SK<|UFb\34QV~ 8s?<@ި:j=p"*/9ܖA)Ϡa@<.)B8P4Y)T5v^Ӄ} 6rrp|[/X=Fg\#)yM$N5ٺTDo>6 <Ŷ\_-Oeeh?~CB͓ۇzq{JdJ ݇n?t7ay䣡,^ZaѮ1mf>W*OzY |#if-U.2,wnI6ժ;uzͻIn)xTĘN;x^^U2wKn9,wnI6twS[ bL'}1wR 5wK!n9,wn6%_iNwCŌJFbU;$ss\tE2X -HY6TQ%:QBrTiUY[Q*#m#IԎ+n(uSW!(ObnA$ ۜjL]Y-9H`tl9†^{ڭt9ONPꕛ]b*qs޾G YؠP'b$(~LN- / }@5qA)a ٔeRYsKK1oܩW>2wqg(myw@һlj.4c"3eb  `z:sr1e_ёac=Q[&\du2e&tLAV JI܎ 4D"ᔙ0M x J8A_$-jBoW?֍x&sٙ_(d^4<ۢjf662?)Oaj~Ex(8B* 5xUv%je(TfXak1ʺk>>\-jiBE*R[>\-,Vk_ÔaY:t+7ol (HT( QKyM[)pN| pνC?#n !յviH 35s%TUQ lǧ' y ع:)DIkB~K3{cሥU;5 }-oP3Y1 »apaJ%  幙#&8S ,90X @V6G֊RhJg L 3:ӌSl[b;{Uc^HT$F*Ƶ9J>n܄sBЁ`VcFΌ̮ w/ڽ p0 ԣŝ?IyN̕ea'mM8P*Oж0&gt%hj 465 v!ahoO*ח*zUHs|ZCRd$#DI}W2!N$G~xxzt/,}5 3^G}61F4cj;́g 3f1p0sٌ=fd fmЭ7y~񇧷.^*w)&@`T[!;x,Oe$3rBXjf}YyMu8Z =S _nݯ &`hGI׀]_NfK ;S^^1a;we\<,?@&ڂ8~?S81)4wf}A]j"Ƅ"s5:^fKa_(goVrhX-lkЋrl#q!e4EBD>&%*Q(c'$p]@xzY@h/ -^Fk2F[xt-߻KCK!ڡG5FVwdr1V~dLeL5T82p`I>suYα!j֓KbNsZIl).PO(mq_.hTZ,|y쵨@H>Ij=,hcsًai;'+o#I_vȺ]KÎM0j 
6Eh3YH6j@!`*ˬ̪ۥhKj~Ɏqϗ{;):&a3.\~i,:+.8tɚWcM~o'ϣ=3 Mt[| 15* t)c-Q%^l x%;k|p1AyhW`'?*دTS .j?7=qr=E2 v؇?s*00\[:GBvx%/ɻR֍o:.v񜲹!e޹n\Npw%[HQ4*g;9NnPn4c?>7|f6.K4x?R'".+wĹocnO[ȥ0_a12z(KZhBv: _JNJџVWg̰(X"xC92.c'iFQ*~ ln*ƒ<ը,(ﲍ ^T.ھ|TghT'B)y(Zm]Qelύ$sFYmNE~7PzͤEB`%Tzݰ@xpdLYfʇR# Qj#qoRJ(JOG}߼B,>D u4pr;B-Awq IE$yTMR7UG0ɍ/R.p:4 ItJ-q1 3!gRuAӉ RsʧA͆ _9O2o(4`i/mAY.KŃ,4V_ڀ+˪1RK\?'s@Q汌hA=t 4rU : -IO"=@VL%F#kh$L'NgoGU:Ly{U>W}7p5cɇ)`^Og-ߍSGP}Yg2AVs"`13o7=[R42}SuMzIdHN +OaǝWgx;y)3R cV/;"Nx,+ڐDGlc5wU-Ur JCg&'|z,>M ˦ϖ/S,kő R Ni >Huu:3^@49BY؛OxJ?Z ~ALOw#L/D6Sx TEb8X[S񫶼)gm޺܅Ś!$j*PCD{mndËVJtIx9 ;{dkFVgER'Arn6;IDbF(eJE<ԳZSC3D%Y)@:jV&8qW\ f=c%D3!)As&2wRgT'?lso!C~\ٵ\5$4 LE ʸե(2eH \#{̴D",#-/dcߤ-&m7-kLZ_XB07֗*wLhH2ԕ6DmQئ5Jm:"Ab˚/o 9_VJ ubјA0{EC, .12FŴ,z2U-䘑g3܊@4W%NXNx꼡ʐ,ITrLǔ͐}6v=z9[N7-8i%TZpU$NmP*^aNܖ8D9yjpzb-u[Z)ʲ@qziP)0nV53dܗ`4(Af EuY4p\A 28{ŘRc8kf xSLڜbJPN)xrq?ayH head$ھbxfHBX[M¬ٚ2+T'A`-Yߛ M-N8g߽*)d0A(S$ A=2\֑ǻ)ٚJDFs4U>fTy<=D )(_mռb՞+7%!53Ud22xb5&(M:^snL8{P<I^9~ﭔ'Z|>-WhȬ@ Vuz=Zgك_P*7M<šb XlkBP1Jϒnw%c:=X9{!hFv͑,xDiCsmVA 39y3|wJ62o?^ـշh1 dRfpQeA g6XTZtptA$ ZKTKVi( tR[ǽż󋶻8RV.=w6=M|>„91>-;V[ٵX\[cD2&y̛yYH!! 
dˉ[EŤ3vP>q]uv ?ЄaHS86GG¡Tc8z)8t| q4 5ϩ6igאGFaYET 7mQ4yQy1~;mz{x<1pf5 wγ1<Л4xGh2=O%T*ߤHgnB2@6]E0W6OcowjoW} +-I+F2%inLPq ڭ)9t:ڭG~aFڭj6$䕋2@GRAlIKuv*g0tonLhZ||܏>;NDa4v۫_vS_7rs_#+Dg5r@Gbr_rM/skz h^3p$ղzWlw+W _gKOn#v&Z۲;OGfZւ@5iGM 0d34%~!r3ԉ4lP.MS&Af7PIߘͨf_v M^NBY=zef &$ئ+URTPњ6K;F.:KY+9m*"TQX-:j)Kd!jQ9$?b4BS rzQ'GKZ52ڐW.dJiuvkA4v돧߯[ꪹڭ y"H*]iO\nD$iS DU=\5}lւr=XjDǼ]9#l}&IkpcƮ*Պ3s];5L6 TR~3`d PH`~^%:3`(Q0 {z :9FVֿ7!QV>K'}Y$X )DuYz agXmv(k AIsf?*Jm-RR@I]4:zJt6'MPք5ʳ[pD}i2@:.ipGӥWEbt;׾΂΄MNhWR9ws̴; Cd%"l#;e P {|-&Ҳ3%[ꍝ򸧌c:'VâȨq< Чq5L K~m[8]ȥRt{"<7\f=-@P2xSLXs ;$I~󋣭' )N<ڤ%/JFѬ5d7_#~C&M8)=NIFd4VM6`ܜl!cG+7a%xclѡ1&@>X>(&TQɷ*+6Dʼ'3{\ظ-GZl8nkh,rQ;piYɵm%t5)]Gj9 ZXw|w=[Wưzm~QJt+'KgA Bk!B{Y"_ħ˕⢁mn|xgmwfE`]QVNn+- &`Ϙ4,+ :B ,Dqj=sYÁ}hV{T7?շ NaS=w0"@q 'f6op7\m#epɆf Lemuܴ)X֜>!BSVn[xHZ)>@2B4NT:u ƭ2#2dp*$ \Ǭy B O7wKr},~],()Y?jn^= ~090CYߗ3FoZ40*t^8;Q EUx5Ht<3 D}I on(v:Ž`P^Snh~<Ʋʆm.w{e+c_ ؙS~Kx7 zUCfC܌zN-@jCyjZWc<D>^9̤obҍz3[ЇC pW,^S˃Y/fXJZj&A:XꜽFLpİ$M[פqNk4EQ)T{푗&MA e-ްv2S`9*rѤ,[a ±D ̆;\:(x=oB-5p 8ԍӲ/ek'I1^۱}MkJ$GbwdFXXxo}Ӳa$1+MΏ.DE zKY_~v_O>n$.v3Ndt76Th?}gJ1mKX;ĵrU:)@Eq>R&V8e` r4)-+񰏶])Uvr*Z?TyHc[FS,gŔ+3QV5%7'IԴxs7@O{d(#cކFгfBUY(u Yrcq,L},{9LJ=o"ds\Ҋ-6%?gronWwߧAv*f OA߸n$uN]vx!\z(bpA^8~E٤|m0\ Y,B{ G|rt ?вVɩc#1݌]Kuj]R3=P#}?T_]d`X="sq辦JZ|U4]w;Fve[ x~IL[8{E%Ɉ1rqe=[K={+kvΪ_v".;,k"~11h.mh3Twm<fj_-Y=W-tiw.K &)uc}ݎ;SpjYAt5.>: Rօ3a7dS1#a35N=;"j"(UH(.nqph<ÁhDUt,ݽf(~,OyBQQ_$ʫ"=P+0A'$)7WF"mS}?`.=tgDzAyYtٴ8':d͵sfG0A%)*ި ayIZI0MHW/*.% ߘSi|"H座H6'{T6<{Y4ew# Gӽ(u#1'eZ@5M1EXM v(LKF:~T݋ RTU+;h98B1A EL0c9J>VGɲ?h)i  JhpmFH|`<*%,:Yt2GٞsߐLn-h|UIxٴ7iNu=j0pt}79}Jfи{ \ho`{h/'<'ў!s:;!Ҙr\uT@I}$J䙧2&hz zAy%,ẁ'DT3$ZE˄Rh# .yߋ'0" }P Ƥ @RK89:ǰ,5CϰΒ<9POfLfJgչt. 
NXc$d IhRgh*5#NM(Tw4WmYE!Yy E)B,qʐ}7<;~)p3 ;'EmC;o~۪RI*S?6)K=eX P/͡M-[bO8Y.=6̵缗0x*&wW$^`Q=ـEFYNAcx:ń7ǤO?@Xz;R-]kN0o,7>/~FS.Ygon?G_J<K50 TP ^C#!k#4@Vå@>?ov*HX˿ΰS4~G7# ˸ yƘ#o Z{SSehiz< "xs?iϦt='OM8iNдu`0cHsq;&['toKªҳp،!bʼ^<؈LgLg'cX%8p6'hHyJ_f^$gyǎxcDc$&[ڲS|R^hH^3%cQ║X(餧BIFa@p%1 vÐ@(Ճ=)2i >@9X6FcE!sEQzf9"4Zr][o7+,YE[g6M% Ǐ8A!¹ 9d{1 ˒z*WS^"握-bBSo{cX1 !6ȁ@Ip $8A lMP1ۇ2\/ϣSA0g:\_N<wFwdZACxRǮَn(p(NA"ÞOSFv ?:5N1 @1^kw0 >.^^-@ 1Dٔ{.VsCIA&ǺlTMƏyHW(mTYZ9OtEY_ٷ59RQNs}>~y(Τ{Un'o ~,ǻ&>cto^il0\f8V&zN2Lf/]Y+r\,P'0׶ɐzSIrfd d!_zc[M|11gx>YA-|&cSXHzS`uS~|!^ syN84MKਧLbb8 wILf)&TanynkeAD;DĖsĈCk:ݻC -'I%Lge_7UOM+ ʟ_nӧ9E X:~TzJVO֎kbAh.hȉ|P{|- 4P~tzX>}(r")=N1Af WD'*|RGBt.bʿC"_e4^ ?]R|%xd6Vʨ_x(BI^W_K >Jc7DdH>Oi 0e-ABKv\M}]I_gT\[bLx[pVT=<.ɘ*_ /?֕'͞8J7+/#Da7MDmi;4^(y4rr_}ѹA}YS`8eJY"@U$Te" z+l~qrw6vJ(TŒ:b_Wegr)˟~WNÈӄ!%$+D<2XwQ| F3_o"!svl&XGycG`::tO4K-hPl43#e=z~Kg9Ɵ7¼z\Z}^.Mr:++YobQsGͭ5標A[$'%" \u)D@UEE)D VRHX*RF u5u:ߵF,Vmx>EÏN },&[=VOtDgAq*)&BԴp9<BDF+Brhm߇-%Si+PeY*%8-8Li #USn@iYX rK#<`Yie C,Kh[|/$aXm9P+gPDHE*Y )J̫L(XeL@YeIE&:A'Y|2$щ5"}g- -;`| dHFa[)g1Ew]ڿ jصKEg|&Th<]=4"ZCmJi4{ȳ1`k*LOwʅu .3aZ]Ƅ%>% 0T9£ sZ]ɂsF GīQN}AV˟w͈}[0W<>'MW&ydj3 ¸S$1P.]v^/g4`ے K,`88T}*ymFJ%ABñR9ZNSN'FN˹|0Wyi**BT{"~BY{!e84ܻ W] ~Cd l ء @/$?Xo.6Z-%A EqĖx !n>Q]Cr-ѧth7LRdeB P9Ty0%7>ܖ!\VT۽i2v4 CTeݝNM[G'#GGe"=GJ<=9lTبvvq.DW(~ dgsߋ7x_?Gp#xGpCJfs?[sتfƆHa9C,0g߲W{S.9A?'0kN}1kylA#5clZ2 ,ytxll1jKՖv-ִuH HDrJReDDcJ@ꍪ`jҊ8=ȓGG/E)aƎjOa|ɑI\U.3XdBTJ* @7ph. 
JQ PV$Ka "AEZzWJ/%ӯH9BU!K(2M)YUf)Kߌ(ȕyM]SnP6٠a-/# g_c؁}O!*m$}D!Op5@㻃]>>Ͼ$Çw1OР'pr e 9Z K[rL)RrEM^ҕJC$r`m^au϶z,DE潧5@ a9}1 Dagׇݨ߰$Gî!'^9 P!b:49[Ð+>d9 nQo͓\u#j{ydlilqiCv`*!бsieS#5genُW-!{`y$05 vBjZ78?;Aŀ8ڻAe̛\u7+USҰ~x9D']/`HĒ*ă7-Mң]=#!ǺFi54/>*;tTEj9Ot\ѿƳoNE9h<+ԍ'o ~ڬ{ǻf%M!3V͋q9l8Z锁mk4^5VЋFו־2o1%4ݕ3CMx M˦ ZšN[ #I}wqW:0w-[ MǦ9]bJy?AGe䖄ryN8 #dZ/o{c4KŶ} ʩ/Sz$,] ?Ri= ԣ},S|B?|T#7N˧Y_os) /tOP]TQp1 2>/ET*A|9-?dnG#)gLv-Ӭ<JQ'6b`>R*;rQ#![kwK1, yud?=l ?3E:M RfZ7'+XBÉ$B&DZt.l:S:ovvӐ?{' =|j=+ȭz3Fx}wl1r/Oi' OڶG!_k) IaplܗdXn/Hp_Mң[HrA~dK[/m/ji &c,>U,b=5m^pgL+'ƟJ`tdb 6'tlĤ&Y_.ت/~;7o4L2:2 (RҢ2 /XzQHxw/j)xw/g# % 4'0ETLXP.w`t,痺u]EomiO?ؠJ+W-m|wM~8y?~)~p <7N!׿^NIqYզmYk' 2b!V$c8ҨX@k)/]޺!r VZ׿8~!jbY4<>9 i8}ajti(1 -\|Fla@ t &Oy3nqw696t3Y߁Q=a^߾5s߲iH*0$94D%V;d&Z8YV{X'BQb0'Ne (p XۡBXЎy%JQ>#:4(GF6^AB+ꅟ8_FTnROyF>b;Q aYWL8Afԍ=WR*ɳ WS#k.KEm xʽPhU)"8i]1wfbVoza~SX0reSNc$ Jfa5yj,g8R4`XqMe\x$j*@[+MzkتRpLa%M-׆PZk6@sa5= C%-[бĐ}գ; g?2n5cͧІjGyα_vz2\{|̬ZHkYX״1bgΎcr 19/Q;EpZ{l k'\ Qh5h\5zkn$ LOkZum=6ֶ?$gt[)oqb-N<#St+kJDQ=֠A`3t3ske{^l{W8tp + .v̛o C¥ T*UP-$c^;Z!G*X`'աt ,zPQXt@Gr J*KQO ,X!RPy*4-(8ѡi1QHŻ+Fݓ<ҎS>o|漡&2SL@^Q)ā7raXL^I%Ŀr闏w&=r<0]cT{1-,ʭJY>o|вQEǩT1-速D oiۧ*_j|t|tt~薼4./59] [6ԃpS-LG:E0n;vZ1]vo a^Q][Ġ6,˿P_3LnrEʩ@V/Ww\}4ׇ>&ru@쪜LT7v]Ouw='Jɞf:A;HG&P;tbL$3 RGF`ݓd)۸QD,H.3ve@Q1AjNonP*T$ SJQx)Xj=<}Cx7\Y_lm0T\%Q%!L0% :BKFB0"y0h=SQ"QE:6s}?&?0< S7Wn}F]9oh1 u\u ׀L|kג#U`=T7?Azl@tE\'E\GCm-Lp`c>=1*49}=܃?Vn?-"Aup!at^*&c l2Ja.=cA'1IԘJZ*k_hIl/w61Tr!{(Eڃ{0D4BZ$.K҉0jƫ[~XBlyꝲ,VyW=TnQJB[lk΀{ 1DSP28,:J_0P@Ct8x:)8:KU-lO)5wsxPykQrx !0fޑSҮIGAt%Yvsea{3P\a{+F" 9lW`q+hsiqmb70sL3fe!taH@A@P60Ҩ6zA&`@@$ kQk 6>w6FĂƵ *tODMqWXjA~"V lE؊vSrWq9C ^I\hJt I3lGߗC8<2r4,}D'$GWyy"zb@ ϕ-RL`.ϬUU)X (1< %꺌X8cxRJlXl'x[~zS_rˇ/"UtnR7w/ڋ᦬r8YAA|wtE2El>y1={NbC)FA& )*97>bf 9wMn*E(9fmGnps_Eq0ނ:ZjATGjvlлrHIkD^ށ^OҲhV9|X9?$8(ӝ?9_g_Zc [~e餠E6Vݤ(gr4>vVE.yUBW-_tE٧?]sy]r]+YRr3qU ܎,7V6}w[[ rLmMIYgڟiez.,76%1[xڟW7k"0EoD^ jN|rf3LbЫ%zlM2&zdl^8L6PyJD_e~Bnkx9X5u4,[Zݥ#LdB\gbH+ f7ݹi5FLb?}|wb[LSIfQA,pW3 7@gq|o1}o3ڭ=5GXgm(S~8xSVŎP=8o{ }V0=:Scuf@}|[uю* 
YhObӠVe\xK/tQɰrucn7GWs>2n~GOV!J5Q3N:֖$7jUzۑ`h"Iyt<:0 y:D:VoATg^Q]0!Q,z~sz]^[c2Ixxb7L?RXZ@ U۠%&NJniG*Z7K:KsZt0fPww&*)꿾 ]w?~91_B]Eh*GJVehvbeR $߫:aɫJ>3mSwMG$e{{e3Nӵ/3ⷢGFЄe=h3IQ],[b*dLۮBP0X1x L9 5L멒0(^BSIy[ex[:Tۆ HX? <¬p0f]7Z<1p+fhЋ?z" Nj1fGvd1_E\j"$gnq ?ğ]!$ᣠ V5T\/?j4g#c*>Ϊ/6bF` D#i/iYE/l Z UR^bI oE=$1l`Xȅ/!Dpbrڠe0@*T?701.\R8ňĞ߃5 l3EWjd|8AXW(e2ݝ7awt$i|3뻙^7}zuz:LJ%gdѧ ' UB`bDWB3P1aE FxIm}hF;\d%B24PςԢA{ ՠ3 p D&XxFb;JT-DchGG B;Р%u`2֡f2CKirLA+csIF<_FU2|xC` _' q` !7wN&`H%(TWJ ،PTpR wEfA 3l%ﱼ }@;zkPLd RIEd'oVq f2I%rsUm,up:W jzl=/Z5)Y9glmQ$63;9,"j߾8rWJꙛX#m:ogDoC\`[EoHds83d_XgD TXZSќ+qwڅ %b:<~̞`l?l9 fsߴ睿Uv߭Zi1{MS1[v soB {Y`״1b9zZ\Bm]'Zr~Yו{mMÕ>I\誜tF!U_‘skq[G$Hy{2AW K5A^ \t8T/ݬ(Y:/TmXo++Mf"2INNJmea{z~?` Q%!L0֣$AUh)@LD/ g#֪o#X9-A9n\aͺh%yƆƘИm ]1+{1kAYk`i!#^%C' TpZ:U=\yi5,;:ZuU*T$M! (,P 6IPVc7>p"0Ӗ>NLbUXq}h3Iyq = &sc0O=.?N!R1(Xŵ29}W~i/nϋjRLGIm$OI/w~Q*hQmfZH٥ډ)U]iSKEnٻ7n~ v*e  g@`_d犤I]HI3:ZbC =jU,unڄEY2e똀HE4H*#Sk7[(>:F֟x ؙv v!!\Dd\AvZAb#:c4n="(AF;nn1$䕋2Ah=,3>ןlT*-tnꭏh *3~X9!ݬ{%:vA{f%O魯2DnOzϭt]}! Zu!-񰱕PQT7}fB )H6j~w@5v? @85;e;Qt7&c{Owdt65Қ۷Qc{휑h(Ȕp*( wGXh,ܣk*V*Y~ιPbq@Gd}'GVQ@=C] "ҽkULhmuTK[D9F.:~rj6< 3/Pf}^39K!KL ^ @T`.t-,(I]-L1G]dZ4a). ΂8jI\A&iBόj]:TK0JC ĺnl}]Rfˁ5{xݚ;q]IPKsZZiU)JA|U_ A*fz'~}a:@eL*V|vuXυB!H(#:>KD \_,٭&`S8__Jj6\O*ǹU0߷zk }*8KK3μ0f5!:tzŪbÜ`[K+M  !fX ,olsoƥeyGW~Gt~\gRܧΏ=1ߺ;^J?^RH%̔P |ChҟëRxfĐW.A2Uwi_qQ3!1LcxLUAB^_=x%ZNp0u~{$0vs2*yQ$"+&ZsLڮ*I>.?34ϓ '\x %8R30EVaP})EbԺ%G 0l ܩÜRҾ jA=2Ggd\7V/??NQI-Z6B&Dcli͖c3=ǭB̩d!z%"2r-qg@ێè't*$ԁ.na8J |l >mw徸dU#yΊ&iXʲ'ͪ͜͜͜2y I "Pd( ?JSB)K2&|ޯK|;rҫ+fl7w.~^~*v^70]GH|>id "1<1cSA4t{J{, u%i&%x% T +YdD[ј8ЙRqK\dsVyTsAQTQōHK0PJs:MB J^Zm3b}}l63LWUyz?-7/#+ǧ}E!+vTX^n^~kԖ>CLO߈5حaXnnM߽?τ?ܘMyoݝ}x67# )v+{)T@Kp^u+[OwKCRR|/ae6Uu]DY? 
='{Op 7V 2N23 VDηe8V;٢tK,&n?>*!2ʁi{5ng)8e 𔂵@%:eyb _-3Ԍg9f3ܮץđ :%ytSB#o㍲ B='Af̐K9 |Μ|Μ|Μ|ΪYxB+:SyΙI(*2%ȭb2R~&YJJƤ}ȨTXJQ]@^"59 й,1יAf!z[;= A]({+2dRN$.YaWvs|{Ξ&k-ul`;Q$Ie2AфVxqRW[LVZ2C̒5vyt5Qе [.6/5jH~J )c\uk@R o#@Bvx,y(OY9E!30%E즔Qj+*)eAAL 3)2n"Q+H9rHahJ.sB hrBhsڧAMhUiBc8n%pgwV**lVeZ}f#\mMq;`vQi>SM^uЮC 2M@zb55R>2"QKiX{ڹ@]U0"PHS%tՖHh|.AKgEN]S]NxYfFZK%Xgy67TdkHy"֔cRݝ+gZδ\˝wMIz7]nMFt&J֋*ZfzsV"#7r-Z&xsk CcsP:~d6FfhA*&1p**RF.A1/!dمLS1$MB0r~eQǛeR|ǝYmedW;DjRk!-{}|v: 祈=#f[<*gڽ?4hzlb ]!VF3Ո`EH g)5!e53ZG.ߡS/ AY(Y$nI &T)ɴ09~e}:YX1_S_U7O*n_zǙ+F(#JEFNabnO;ğM;*8Lp$!o8M4 a˛ *HN"4fI2$#SR7cl8PV6|6x+#bF|bжײ9m]R]0m}E $DgHDC):9p}S  ڱ,8]5+{q饗ϊ.-:hܥYgn蓬G]Vzht@\xeK%YR`߼>ϯc^uɎg/t}YϥqZk]Ԛ]V,)|s[y[| '; l(^za_ͷ*OBz;qZn GNIͮo1'OEe?&yg5@c[m4b?b/7IB+knHzA֑u)0i3$htg}ʏ*2q<%i<-y4}s~^{bhA(=yZ=X׶㦰CM,xB{^.INy \+%xq1.x!P\߳.xkz|'|Őa9"|TN~K?6k6_eaAC|!og|z{O ^k 0\|SLq4(C,mRo㎛J *NJMy~x[3z.X%*vjs㎛*a=Rq/q@&}撕NntAHt>7}FgNQavoЭiqꧻkajTk(ڃ}\6Y%´m-eEl5H 4 6=)dw?&c‡%GxJ3ź3qB3oO&ۏ]?lq|=Yg>kK6^ԣXschKP{kêggt2L}gϻڅ _z'E{ tz.kD{iz&j 3T&jH5Ħbv_ݹw:QiV@Lo.w\ڹ"w/>5GPû֖YWc*.0y󘴛fmuBvF}.xiY̓^d2x_a,X}g!A|5٦m1` x.i-׍Ww_ʰظ 3Pu@L*#0*Q,o>vî.հS䄱kK5ZU+]%wTqńSz$@(oꙣU4`,\|H_=GRRx㑙8LgӛzgWyq|*_ gnub9Ըt]%q5w.7>1ufw+ ‹>c%yЛxs? % ff!d>ΞUe s.( 1(BQz^âձ,䅛h-*=n wA}FvĸE3Vݚnl۵ 8:79pOn/+b5uMhp&HBaO{wۻw%?|./K=ڀIa8hSMq(QY.o*8c&LǑ2dʈGB~u"PkY\]<o/c 'F{m+Ȯev|:R6xH(2~_*(/Ǣ@d\X>3zwx\WGl ]-F[.E:豺ɴ[g~ k ?g1~oURY@S#0n"QHƭyFU&ggҮ*-SU$2C]Hᴩ^ɗ.'gˑIkIxxX׋|F={u9pcf;%*\M<$s!\8_c*z)l=& aEh7EFv?ꜵo^؊0ZRkК?'5B<G%QJO򜎤/1ܔT O51_W}! %#>Gjȷ`:*GpVW/||m׏>'8_?੐@[B~M8CS }*QȚ!9fb%50Si*HS#$OQahI611.u*~kVtO^jZt A_GOgmiZJ!H']m'[00QMuLC!^^3pdA;y1ANzqvhӥԢN7x$vڮ3ԔkaZ̭$\ ^8Ȍĥ00ED1[ӆjk%N E FZRpEBFt9u V8lvu\75;XNF5 ):k%bʼ *Xͦ]ʿCJE;#VSE)ٽ*J_n]q/Syy鈩̎@ ;$#"vnv}O$)s;)p\ DF4&s1sXN <gB. 
c ,}Ԕ|x?NXˉU Ɲc"HȂ9a59g*K 0X.mԅ+֌qy{>N qYQ~d\-Uū <~-.xK)Y޿{X`^,ot}7CoD-"F?v8ȯgZ0:*~)~ʟgw[L(.٢&(5JoUQIbSP]}Hn*!kY3jyھ oY /RUƽ1KsKBmɝW >\6|k93<¹*>Z%: "SjΌ dRGQ5cT&:];X.VJT]_vjnb(Zw㮛J(],zؙMyÍ ߠ( շ:z#M6}kHTi`C )B?mhl jUI}̭ͭIO:bk&X(zSZ\8hEssћG<'9 RhќƂԣ[mDӐXr :mD+B?R}TA dݞ64 z9N0>\&ersn{.RN.G$P@54wߦ?0^C#~U5m})I Oԭ0w/5qeVWr҂5WNũ\b+1Q\qҔNWjU#oBη.eЬeJPГзI~1%=4O89' ܡk|۷VFU8L}fַDm4Z!q{1D` q,d%Jɜ*p5߁E'wJ:Jj08\jCաPqfVI*d ="‽q 08PGjeUٵiՔ!{2`6xq%zXpYax- E¼s\ΒdU d̅BWbujC{p>Rf|v=,'ޜSOh"W!/b?<-`IpN$(,•N׋Z N>bYX 1un\.w]S=%qHNGj;Ȍ\pjNW #. I>l+"3+ӭ|@U5aE=]H K'ٜZLCrX2.ӕlPh%ʼn ]!ugou^p龹o;UJ{O=u0w9~6}5 JNZ' Ѧ `#eC܀͵CF",_댥Q#,KKԭض1'^*pq@ZR3/rQ ?d8TtCSO18v`Kr9֓׮[0q4:a4(N 11 4b^.kKZ]|&2酌 L@.4_!yB-Fh+8Ugp[dү^CōHjD/2MIXۋZΖxq݁wMztG=dRk/= H"︡D Gy3 mpv&2g.rqJ W76մb<N6ѕY^8TIS2Srst =ύ Fn{\ g]p3-g˯Ie`;q]4􇇷eJ芌L,Ηw=SQ3`<[xw,h4-W;!!WK!2)Dޞsԙ)D> cdGWf1J!5QŀCF{/,n@)F](=2圢sn@j=hNQ<cfQ9df\$+աUHڛ~WVs4n4QvQEhswqmPW WX<:!>DQRr =IJ3!e%@ >p{ѱ港UfELΛ!+QHH>o_;Qʖwz`Dc.S}#՘nꫧt%oS z@OVӞȈ@W=n 1iS~ lÀ)O렡eXMLmPp%Eul+,4vWkPfzR~\h& xql\eb/.b Ⱥ(0c xEq/PȤ&0WbbEE-J_b&ӂr+ݹQ5Z;jd:E r;pVk]BQZ2^50w@+=»fd-] VU[#tF4B{IsP{[!y6*e0yb꘤.d,"P[2p91np.k 8 9;+qNqYR\A_ҊEPdzɂ`: *BZx63k%У;?xph7ʟ>EdK>dm`S1Nn:t; w; EZU4JXiL@ɌKYӭہmCQqVg6a :9CLA'O&@dA]DCWA3A`]|Ҡ pT9`Ў#+Pnn/FQJnjӍ#η Cp$֠gBr"$2>3ƵYrƁ3. y^jeM<ň|xP5Чn6}QFH> Y FKqD*( g{a*~&aQ8-v6`QE0s8zM?[ϖw07iQr$R@C1`օl%O~AScw?ݡ쾊8mutSJ.5o; uQȃ;~7& }Ȟb\%]E J"Ns=m(famMօ)PB]éq9LNS4_'cSF]|u I)?~zx$Z;|X_cŠo?[lN/M]]?dzۧǻǏ<>}?XKt׏_Y}s&׿iIJFTK@6mmzwQPҜ򙯪x@ރ=ռ%IR yѣno'? tMx[V=HE^&)cN^6ql;6)gמ{)I 4\[R; A`6 !$u6i鋊OK_olo^_…Ep6y*|nFaHT^tel{ Ʒ99֪5W;DuuEERҬR\ W1 eלuWa_Uyz2J;p* b4QJ`ړF.FEnhc|h9iT Z<%Yj~Y h?tݰn.eAR1@GV2}|# F[rЊȆS/ BwT<ЬMwә@!"R|G}.ĒKfZ-A~lTJҡ&@KBrpw߾%[4RVՊ0ՖwT{BUU>Q\tr`&nҍ2Zz4w8NyqU`mZ)P^>/lgSmUΊ8߯aZ'^BHɵU{ iHM탕&po)PJ_TDRgRH.㳲h Q\ $|d&r&խ3=,4V*nO"յg4@&#\J jr5Iwm۳ vBQ L @nZ? 
*²X{h?B˜1z<\p%\bQڃG,]{i:=#c]8G.@VUmGW28ԄE)6C^/Ǎ+|YZ2 9qBmy^i'߯(il5bVkFVXbX}݀p=[S8ۦ,# c_@F̉7 )#U9mH#Ӊ1oؚxґcp~L=?GRη<䊻 ptiҠdЖ~HJa1T&KI&H%;%>lJ-_krDDΩ3*wuE ?}JPCS~8g箦N2otE{1P< 0Q ,Tm4~%.U1l_ @z-"[K}3OPi֪T]yaGBj>}MX[(}m{'0͘ΚIqE-t<,;7Dc1{1pkۢ>P+*Bva VPCca!vCHB>- =kĠj}!%˜uIxo?C{OԬ<݇g3+4 ݐ&EN+&ur^]-91*I]76 &|S WXXhH;X[bc}zvpzIXQE8׳փ8Nh\%^ }Ϛ;ΔY 95DzhqsY{Z|`KtШs^JLLVJ# k\ʅ+6\`Anjzh/_,׺QAu}';ݛҿqvQ2X挜YEûKf^&\(ʨ&ٮCRQuQwUAiՓMRj#>Zl)ҢJd *ʕLb[} 9;#Fw!{yq^!ЭCpvQ$t[T5Ctp]؞SgrBFj, MӆccJ'2Z3 6΄Vz8@=Kl1k8?,R 7Rb&l>Jx"Vc^6-fcL`cr'L|r'L䜰母8Ujeэm)s0k*N )Q vwm:PRˇi`gꭻoVJ2ir%HjVCʆhʙb5 6R@Pw75Œ[nLfmV9S s2jp՚#++]k hWIal U4 *NsT%r+uU!F!CFQRQW*m#ᴪʠ\'pNHJJp7'7}- jobK|zT?,~M=ED ߱wo?_ƿ?w{}&\o@y?-Aal~JψW;WtӲJwn|n̓@8Do_A5}KTp#=TrR"ZA"110#VZʔDQnJ *]v@wʡI"@*ESG[+%Y+,RIVfϹR[CR@GN4GN4 Bn溢@c֔\Vr+ZJэ%nD++B:u$N`VKҔ7N/ՏL|fc50VJ5J6ձDm@hù֕ tBF9ܜ ZV*ZSJWFvǤm|VOievIvnv6k&٩9Mr{m,]F)P\feW_jSnXN-m<BJa-aU8Oy$L2'7=LRVfLDQLIjryڬ.мq@Pf*TwL%h4COւ*d@'GO-d&Ɏt/Mzx"(k󚮩cFB8rB+ ז44ZhhTRjit*֬TȎϢ#RUu63&ԄKDE46B /%sjVs(L#ްњ.^%4a2pœQ$fj <ʌ)y2ki5mЊAYr T@P㴝*eˆQ̴2Ё7bUo]&s4IL'~*S!6XI#YILjXxQ6U[oF2{0[{WJ\KRX9Gk2@Fh&V:ՕooٞȐ3wÝ0msdl{帇:: [(~zxyx.}x\(%&0ى{s;O!+HD٘pmچ|PKK"m Gx{venqq@f"|?}\Vr SD/%3Ew,Qlf6uiK-C%:0@8D܍S V\M%RX.D,?…ooʥ:I:}F+ҟlO YʴҕCZd*DCNB{8ؠ!| qL,1DK(_$ u$:.czHtB5s9tijXWM^58L>ڡ[C<0aC(5ƒXB!T?o0.1y aÃG<98Yg>S%{jv*%F:eGdANYy.|{!K?o3Z<ݻQuFttdQTT]<ݻW# 'kttv2G{pH4{m'D gXl,;sz͗9ggNM2G嗜^Sc:eΟknEgH)]&#MvZqR.pOjG |B0F7K~=]<Td/ӎqOR<)N6 m2O~xJnВq :]ɳK1g$M1kqN>yv sZg38L+>%L΀0Y8wKlZɳKZ7Ø(,W Pm^MPd$u(Z:w{z]~qfm^!_VjZ>z⺼D#fBO8.DSO]/ZNo=0(&o~C!N/?\Ȅ6L儫e<Nh \A p9\:ꄶ!j@buv8ҖS^P2#Me DNZ֥" ё )9Lv%Tcb(S}#D ؂w⬔7"6I aKԹ f4ۀ~70r2Fo|vI{M{y.?8l^nO\)|Ŝo[5-EΈ]g|!ЬxM`z)#ưNږfEi}y~ VFl1+fDJ%D^7ΑQ9ހ@G[*ZQ8*@F D 1ЧNaCjQEU5 ж?/o:5/Hk:Hcg? 
1P)AQj]AEl#DB;l$ߤ> !Ǒ֣8nmH*hf@E)kᬎZ@4l=@d GstĊb$)k#d0q%+9U(K'"*i xUTL 9Gl8y,pcɀ xpF6ܖZ lS6UES+Y76\pa#T<4<4c) ?RZFqc57y;[b%h5< & 3R/L-A0 vH O O{մK"v~XhP;n莲+V;t}?HдnhO۠gMڄ}ꄵcNxPtvu:{>~N:we:v֞N*wظ[䠋* <9ͩ趐Y?ywԭ^9n÷/<%])g$H=2Cs`48%)DtoZѬJ/8!sVs@A ЌBBFs1k>[-lF3B|&Hʪ H0bwUIA$;8=pXOW2H< nN~KD=w:t+/I(6&9O93"5nj(+һ\#HjkS{u(w^F2p>ԓ=^]V_ 3 :ݟGyxyxyxy|Cuy[J*@;+-uk"kVZКȊ4RaW^S땀(uW#{%\GCal>h~94_)|H蜸 LZjGJ_YkZt$uP4ժJ0@pYr3TІ"Nboh ȃN/݇T;ji6RPƵ]nc7}郧# o .}K- s"؜ٜQ?~޽} aɯר;|0Y^&~jl WW!__৴y5 sUNg?-/d~vɣ_nnyO6<pۣ?(h\,/]rq;!fW2ƌ3I{0?>P=ē#OƑ⺮6,6DjGpiXGfbV Ťla>u:֊`L?o/MׂX-0*2(Õ#L++7\ݼ3̿ӘEBbd~b*ts Z PhPT oR6Uw9ƶÆv B1c =d*͆Z1 '+ghNAD@+JBXfk8e?F(w%E=GlV f$4{nN Đ1T[0407`1Ɖ%n>BHSm+c{+fq %|lϧI_EyJl g.Z rϑ . T/pL& t01TJ31o[m{ Q4bijǹ [ uOd"s>ZN4Z>Uzw(k.hbjoD1{EA?D4ua/3N[V/=bA%֤`Z5ZhKf`%I#*ԣ@"\<ɍ6d;ocxedٕ`Hy6xp:}鞶fpE^QD9?hJ |zC ^lUA9iei=dbo9 u;5NqMҬaLUn0?Enc8K>\ +9J=JpwW-XTh1Ӟ3)PXA,W?!--Tכ0h|vuYs?Di^ɘ8 '>$T޿ȕ.=8@TCrprDYx C6ʜgi[ww$ԑrgݍRz;>L(Bݍb4ZY/?ʠ.G.y%Ev}ILW }{ӻ|2|Iȧ|IȧTiLEF>NJ\cYr~{)N+25wg }ԁ|_Vr{OhV &A5o YnPCK2z[c8v_:$/ %&B~Іv-g,sVOfn&-6R(PTu}߽_3j؁nmTӳUOO?0OއgY)'J-$#\j YC2r-=،I)U=u, K+rڲօ^Џ/g3V܂prG"s2T2|V%?u9wK[TOO8!ض\3!R!LÎu2`GW˜r; @w4YX?-A~}YۚdO?z,bx|xlww~/f׷ 5fb8[=3 \]Ig)/~{]U6^~L| pm\E`ۣ?kag!a§)+ẅ G8 k 2)b q~*; AINBwR V'd J'8(m][Qz%Y3RkCc/r?ԁ'-Ix!R-y+ 2,c`%(k-J&t6; &ְŕF/Hqj; [ 7Q8uk1G81\gBe\arOd883uUQ,o@4LלPHjJdizIo da;]\IFVDfȤ5* 85LV ` Wv9, zN^,Q)1#k0ά(1%,0~}tNb< _.ڦ_~Mh_C`4 _|Qze+y㛷o$rf, gNj5' o{kbVrArI-9PJ \H,//gBnH e#JBa"KϭAJ-[ exAdp d XYr:փ]kuFǂ{g|YEg)Mf wx̝)3k1Sv}Q8}[o}g*Lƺ-Uhien7v.KܳD%RqJ@Y( ND jiCSd5gwt7vRf5GSrA(SV"NwWXՖ[j"J^n䍓֊>y0Ѿp*8H[C2sGtwT+4Q}aR6m8eKnI8gR& zKjvțd0@DȾP2KޘM (z%F#&R2Q tJ "mp1I׊;12^;DL"S DL$~E ]#"|Z ]/d*A :5FJ4"֩=2P50SrDh$h)g*^73% | Wͷ;_*9 7LDʙURdM΀dlo:~E-;%*OR2#FWsu \ \$ x|='cFhJKc@3fhՕ(Zm7Cº0&. ws5#Mo{kiҟfzfsg09j)6*;g}Ll7(:z^X~)Ng0d4KM8+hwuz}M^4o9J}v" 3l;o7D@5 롺 i5| x _|vtw5t$~2w>bəj3\waG=}t)eoz2=dH5{j8JE/=Z&Vubrҭrp6w'xEC,sm4Mo3LGSw{g0(19)8ʶ 0}xDQwlvmH-UkaT lxTŭ6z\ vA N02)9ٹn?>l+ˠ2T_='4Ϯsw'kƭߗHTjl? 
n}'e3]>yIlE-ׅx#X$xLx n܈=$7F Z>;w֝#Ze#eҀ?ƽuɴus Y4%.L%bp;AX띢D2 kRKUiLg8Ӥg42B:c,e$r,;(dhSpyg2ΝVy0u:v٨&WűgLTFn]j5EwICVkk/sɳۘ:g"Miڦhl,y_*PYK5bKS Y⭩aiѠz9J^= D+G>q(Anڛ7HpA>ϴ1qKaRAR.WplAW9EA'೅L:_b!(ԌqQCtR2ʎ 9 n9RHuۻ@G(ZėDH|)U`1Y*>#`MAt (:HY2@!:-8!XD>v`B%۩&OW9_vOHpyuz?,!LW߾{%!"?o|?Ԙ.arF^$s< nnLry>!?ܘϫZW߹[mz> K=UB*g/\ <42 'w4t 5)ݺKQn/~R^'lGmu]$ܙc&O79#I <δtrf|I 4Mvw~n(YS={7"9P(%2Imvi$$"6'ڥU n-#"YLbˮ|_䒓.Rlei;MFP'E Q`5Fzƃ8 4ARxRKwy^?܄nIp}<I&9'3A lB[= xj5, Y0gw+ HFa"@QQ5x1rP0U󨃯Epq$di~4Eo?B餒 ޕ 4Al/E -S\̹vd\YC֒YtłQ keO:i-]+}\_˱gIwMC;Q!OKZh 5;NznV((-zOE5&ͱ38Fv]3mwKLT!&EQ'&25g1H()lt #C$UJO?[n@5ܒKg/TzkWm/_qi68!\ *OWyBOuwyH R DjG_cRI`RdmC":qw}}Bc\&, JwEj\ 8a[-}2鲫kX =3ǚAnG;Y ,t⺚{w?ܙfeV |R?}UrfѴ}m[/>Oߔf/eo30 lϮgtZ2Y?~ _Tr;& QvBi20s>6:]PZ4b+DzuԊRե}ѬkTr{;ͷ$kNfn'hnɲײ'U<بGW!k dA6? ۣ(OWP=+k#(sB3;lVpzFؔ N֪t9jg}%Sˑ!~y`.1~w}vh!'V[Ř>ΒmEA@T>D<記sʤxP/LO14a Kgℰy2(3 pjP[$"6l-(m` +ZnG&a`$UUʠBw"1sKD7daWǴ.Rc"Gm:"E6.L< '҃+C0iI jkK.!W3'Aj;8s-oe`'û@ o5]XȕڢfUVK]p(gSt'7$8mT5e§5NnI:U7keo$H]"\ׯMP)h//ٗs"1iȥ^QB7|7pH#Ӄ!`.n3>n|Ť1tBNPt41 ER=l8QOAr%o>y'o-vwSV5VQt㩮mx}cwꡢy 2dZ*lldlT!rB)WútPLw9:=Ŕ%*DSBQ K}R(TjG:U/41"IL!2cNX%)G ^J=J$a%ZZN<̂XsRSಖJΩ3Fh*:}p֚td鹺K?egz%"j)@x"#C5T#F'@WމjSSF'ts@Ov(HɣE7urkRsZs+({r| ӇҺv)b@>$eyv."ߚ"ӴdUzPaRFKϑJJ=+K򊻙DZ 48m r j %7$1sNPS}SfzNS;Lrk% m}t Ǭdyt3?R~Uw:b Ҿ+Y]A/uY H'SnuL d%$ji(!)ZIW=dȁTm^6~[{m[!ѫҠCQATꎽ)QS7wt![9Pʑ~m-{V( RH:yRݔܠ'0Jŵ\hFN &TZ4hNk .*]@iJN_J(-i)v_DpzzpÝ HK0zi-5VAp&絊R0#"FqZ5OUJѣi`\>3XAјh]g;W9`0T1e~vdADM|1OŤc|1{#-j U;^qgɠ44# !WO_N~KmY kss|PZ}iﯷDjo9+_o]xqf!ƻE'D*(kC}e+ ɬXXǻۙlWEBtEk(**H,(vQ]%*!|'ؤ].ϲ]߮.%܇ӿGp e2f]j<5x2^׬&mAG5Z8vAygNzj]P O5Vb +zHA" < G.

#vFby~?͟G|cTuDSǣ]a-.8)q*0sJ=iIhhmSw; 9*PBޑ9=li p-&^>",VE/FAKFG#=п@);"mB)$qUxJùkpC6mQ{ Ȉ!&z5&l (G5 ,e 蛁q?.!TҞ$B{)2+3c-VǞ+kg*QMOktI&= z&:K)qZmmRh-z(ӯTݬj%{s8 D>ípe(&p/s0/ݷ)ia}v U/E*of$83͈Qs F 6kxڥ[/Wˇ}جOzH1{r^x)(; Q} @JU8$ԛ#j@/^]]lN0U,@r&R)2PYM-TM+s8ec-->SIBu $TM aKTsnaU ;"`@`AATFQ[*uN*溥1 /Syt:Ul@` 輢)Pk}:F( XM\NK8N|pZ]P hzzޫlZ .咉4ۥ{=)vF%lR,d\5 ә@Cf063X v=V֯Wl%Kj yjU,I#-c<1и_T ٿ(_;i.U*"!A)BWWѕd,q%JMUU(DŽҎʐj*d1J*,Ԃ~V%YLk,[ *k6eB0T* \4^I֩] :D;c VKY`,'+nrreL*K 8Qck9e=$7ًci(DXZh!.M:˚f'Jip`iy͐ư6\ig@VimU&/ɶMEd Dk4([h) pAebd梂L l8ٔR"ؗZ 72B1isI+EUbݮ}(kRE|( _ 4̜CF3ũffw.PsY I;3ZɀmESLQ.~'zSveǚ_n/ņ-c C8ֽɯ}JƷl{Og>2QQ1#kKN9F^\o%pWHy4ɂfc?/p|yA%+ _ JLYt]{uJWx.CNSRÊPZi ͬ-8j9Q;-1~>W%xP2o uUyՀ!Zz4^\Wtvqk4lERY)F]"SQt=Ro_kV v(r :Nq|4Kragz >͍s\>3PuU1Idj`d-"![+EP)[)w _/ɈT)2 /#!]_bwՑ]ukkt#; GO4nu~$Je޵㩖 EzϪKsdi/>(֐D9iB4ߒx#bUuD%"9Z '֔l[b |ᘫee k{xzl-d=g[$S=?i%Q~C;vT>9T#qJOesI1`q) ,`2-6J9S*~,+ D;1Yuw/"? 1.!l_k)-Jk\PBsS6 ũ0YWhjX$J2j @ķXڒбx >#L]Bp(1/tRJ%Vk9c /T"J[ױiħUsy1WBȕ(~b#>$j?J$*+'TZcp})ÚbX(,Lme,vI Q T2VI5m]Ȝ;Z!t̹k&2ciKEU5QtT]Tgs4UK%ABNOv[[߅Jט>9H? Twԧtny8ޟ9mn7s5 7;gf2jo_ c\#cdK+w(kKUwɴf"1JOVlgl&杞,51{s_ & dڵ[= vuk0m:mv: /ݵ8>o«0{o&Kd{<PJ?oŸ>'[4sKٙɌFezo&3260mk`>i_I.21Y:0SSSF,qu@a9L.(JNl)i9waÎBYWHA.seFزBS c@'}L)`?%HSt1%磺Dw,ɝy_xRD>P@ڌnd~#h*45Xjak+\yAs^XBtext#ҲTYC ͍HdsYXervVy eE,ZE}/pf}1A2,~ ;;|0gY`v1uy$pnt{nÏ!sұ2Lc`9q{DM{(o}xOF5ULtd|G/c؟>]*Z+NGVZݎ#+8Bxs.[Ĕ ?2uO(;eRLŠ"3CM7V/h{*?ҀH#R_I7^M/ i$#.2R_! ;nmUX =#+z me??AJhx%TbTY V(n~B|";]^\%"ؙ# h|n^g >y' ԍ8V 87iQ@oϐ+H^C)P[_򨇷WJQa$:gt LFC9U8JⴣҎFuNwl: )ELtglQ43 UQԥ5[QT[ؾ"iY:Tuymd̈́T3$%遡8PT[P+\>5#:RZsL$2/#x'9L|vqOM/Ux/]lj3(2E:dH*އՇ׼*^`kX"d$ g*5'TdE.+p |_gӦZgELOƤEGwSru\m (߫=pDZճuSM9b*DqZܕ_?]uD@< W[rj"%ߘmv>Mj%8}~2G2 z"nf4RXV۟7) ,ɴ7"bw'ҠLP'Ъ9ln`\i{&=3Q7 X9ސsMcc_wڍ=jT bD'u*mw'"[E4I*idRZ-fbڱD$h?>]ߙ{e50Z Zo|gpe*b; zrt%Aq_-ϔ|i3߹^ m`Ezo$^K700w Ib/3ѿAf4T$-B$e?3 MzƠ%y: Η? 
̊9#a7B̌ib DD<ԛm@e-v/I?s7ߧ2WdqW+(^89gi"#zC s e`'G0^r* @;گ܄Z ٦nIHv_{.|Z16viY:~'9PCx,갢`1F ̀қbBr;1D:&@N7(u"5NguSD8ځ eHs`P8rAЅ dXac`-+88ꙫ 4 K4s1Xl>5=+lq̽1MS*|;agQ^$%/g&R DAʓe; J!M왪3 urJhm 1"f {143GtdK0 )&pX`dBQA\GBJ#L" 23P :o~jŽZkc0ݱ|qAÇA'sI!tkpjLO<`/^2Xɸ 2(Hн8íj :Q6p<< E Q`\u.)tyWw_,wWƅr+D(-a#!J믾B(:!n3^KMX' KaD~ۍڼ_/b՞XڝZ|7i#'/5uk~7xl>}?_t ,7?.n.vAjO{;~lN;`o%{kqbnOٞ_Ua~{pflzz{;c!pM)!g;n$zT bL'!muuSyz7$c[jlHq~&&ToW8 yg6H Ԟ`VZq@Eu^{f©Pð F- JOw.2껻wLqFMgKERyaK"QAbV,dK!N%M4Ŧ'0z7.fK tR&\H*HhSrGc[ hMё1C752B 3R/v"e਷LX.ihPy<=/5F#6Rj cJ@alE6V`(*e'5niU RK:CU!SŽ~v*˂u8njmAoZ%`wR4{ې`6NH,.QmL S:)g;!B1IC7C@ !GPt>Gg$'gwJ0hHؗ5 p #.8rt 3wbUp 5$ƶTX~15<Td|ܕ:SC H{7AA]Ye(IQK}15wu>^1&Qݫ1ZWϋ4x%(`Qv[ʽ?.gPG9f 0)& M;QP52kЩ,$=$1З2U&%* ,QE[`V-KϿ?xOdPNa^dsKVa~rď, 2s4} ze`C#aM# 2Jq!kɼ>J<[x)}ylwqd= Y)He UAY6==~mtuE#C5J;7DJ0\ w-?n+ v't#A ^#sbޫn:6>Τ@WIzOz0idR{_Y!IzNN="q諞/'LkT< * e>D T>ȗE 5 YQq|4*$('ةHG+ icE<<(^ !A2׈Ԗ;q9&+6]Tj~TPL Cfݽv4ۼ=ɿ}hO)G>VsQm/<[CW2bM> 9xE31i0ThSl^Q$A@YpFui&#{nB| dZȽ @&TZgGQA H* {,4PC8;Jq׍sH\iS qE=#j`(a05Kd!dƞ ga8& 30d,;$%+OkiM4h8!FRjL# ae)ܢ0a821r>#;޺2@>K?> k~v_LZDFJp<]&yl#*G37 5KRqm+GWt>Z@8̚Ĥ-LmcHfMSV/opF!2f x„`>+41gTa?eB2+4QB3Kl%uH3iNL.+E{rYiJ]s)Q 1k Ly,uLo vpU8N\ hcT=Ľ,+~xP p 9tN*LS_o9)/ վKQB ؼT8nKMe2@[1Χ'&ʡ񒽗fW7"Ń @L& IDEOcNJjGDy1zb45Q!T Ep3’0w:5Ja9!` cb| b$M+Xi -ضpJMBGB(ɔT+  !c^aj`vR}$SxaK0FSB @L2b)w@VRQ0lh4o (2P/(yƶa{XːE(­O ,C \cQ [Y_8R$/V?nnGM~&ھLY}׭=EvS:1q:SCs2w>m:SC_qda|S;Z*n}l !=qvQvNMˤ9"٨+e"4<:Qgz#_6dGjX,0{..qw+YK;*NNI>;(ͦR3hrT3>3 ˜~vSi EMm=wЪ]7qs |gUs<>mOi-Px3[_߬}3\)u~hq{\A95>tZ%;in|IeVynG/DZWme\NZwS`fݏ5[L=zUh%uYSSUw qE[9GXw#,'WbԊ|+f5 }{-{"L=#QNLz*9dxFE'Nzd2*8Pۊ@U%bd:?VH#g}sXO5^OI1"U[;+gW9  3oPI}FEBcwE"LPɈ`SQ9w8/ %c!;-*BJQ8A&"Q>OTBKHM2z'"T˅*XI)A`͔Ȋ;R}/i5c"!C;uQO!0J\*&m (N;]hH-f*)R ʔ/5DIj7J 9%L$qsf)J+u`~+0h1ğ.zeEq3*%od/rBr#J7- -?Ofd>'76q Q %WLI R#`O,0EL1L!Sӛ)zeW?]+\n|OѮxnl2˜7=_]MGC+E۲)[fF~6|j 0jF @|/Q[Oe?s"p2P5ݏ4CCRO|qc 8qP'/tTJ'Q9Y^[znWQLiqW )ɰ5&ִa{3!E P1"ެq<(yj|P<:6)RlC7lv֌1 "[x`ZF|X2"G KA5Av^M58n&wO3oY+*$a\Y"mܒr|8)Qlp0,f( U9%/B:{*wQoep5>y{ G7ط'b"wШEʙ|:%:[{pQSX\.*TweH 
t^Rc:U\p:/'A=n7 0WR Nu"֑ ]! 1ήF̱>e 5$q4*&RWfTp*blT41uVǾ¾ZKxWFk+ϕ`<oO*wLh|keK6DmKdl3n·.!HF8@.KARB$ #F&A)cTTVRsmGhpA\? Q <R砆zRR&"% `ڹ˶Rq"KpI t& =-x;aƛ!Zl7=O*&0-Ӻyr{%oV1Ԃ?t?[%xuԸA K=zT0ؒ| :Ɏ`cvMۓO.[G( 94ĘV[hHА.Є7 Ι?bAFb Kd2FV{B b1 9LK[yT~Y2?㥕f Uy>"x5ϡɶ鹽*O5#rD\#{f їV'6懳$rd=,sa)Fe[QQg¬j)M.&o[OŨDdQf*.tOGpD~ 5~]Ts6'VO‘TƯ풔wJ%wUQBa3Y1骤tXzNq{ʾh[]:{I*,8==숉ǣ-8)kDnԮ+1e:aggQ<)רt1XNw5&" 4dl'B{sD:b:$HT"'G4iƕ$>yjU AFɂƺhm'\  :/v! T9!|r653c R!YC.Gsd 9>jvWjnICm=U/ѷ&zx݉ hgĠ}(5|@?!WYmŴmu6FӤj9Oy,q~a΢wl%z.jr䜐{>| u7ʟ&uNQn?now&)[ܦ4B~CtИjgM޳n\lP Jwtn\g|26Whj4DONDKM".С&SMs#@{2ԃ)Vb=sXIbP2MJr:5tdpGKW#w>D)IonC1(:Cźu5\Of݆Z#w>D{z';PRzP8XUePbG+ReP:u(&+wNΗ@1*>yXp=%(("2K ]wI$6KbFhr4B$A2iY"~44DqUWbÅ CZۓ&\*94 JE`Hr~͞;-2#[jm Jp&]oGW}Y X$6Hln\`S">8W=!9f3tNhXUztWWш1\vĠ,0ݩHkQCX`Xa R`Z*pϢT2E`a ^"A5GQ &tq %`Sse㐱6" ORq>x xA Hy%ED`;! ;ՊFF~ 4iG <8UT* ЁA $Aʊ `# { ;a0DFEuON T,hEpґ5IY%7h8pDW.Z~+I?a¹duy0ޢzC#OM@θ,n'wl >$ԝg>ۮйn]GLf{ )[?ܤ#?3M(_y \]'m+ae,~#dD7)@Us<*[ *#)[e O׵Ԟ*[~fwX\ MU[=۞wW4x] @,Q^HRoRYx»f*'O'CTlLx Q\b% >x$`f Zbe0b~EaRoO<$kXQgav| y;=:(6 |^鲬AhRj؃l1]; L ͫ.9Ů-sP%T/rU$_*7@9UB$=UqlЈq$&P2px0R:# 8j]Q#`)="`hcQ<["Frs>L0|6$h7ZiMҦ'i7yAzO4C8ft^9Ǜ?l$'>E1BϤ՗,[jzGkd6=`$= }\o!Sfnc==8dj -ʋX3 C/JEAjWQ-h1HSal{HR) U&=Y#:L0F49pT)t; "_0Pt}Fg/Wt#+=E8kp4BtJ 3Ҳ.VGpb%b6Stl;S] _!/z 6GDpZ/نА*]wYwSw2\wӺ byyL~ˬY-&oe"Ԅ 9_ @lWn[ Iª !+_P=Oje嶂VF<|mrPr6Qi˻q%vU9MV5| ;W'_}`z=^B`9< j|(`2 )H`́҃-pDD',vZ.TkwCbPjjM!aQETPVs 7F)6lV3g E 0 ic((fL$ %!J٩!+ߖJ`*p FćCWF0\ȯbCh^H@t&cX*ɳ] Y 0# l0" .g5:޽2yH00e@$#"". ( %&'lVjsViG}  ްtiV6qƢGWVnoi>b k нFO(PW`HAGi6 E Zi%cB:\HDaS*z\\ 0¹~aHk$k`F H}M XFF0l( Yy4,6yto,^j@bz̑PC$*b , !.b1 rQ mǿAUѡQ `W'mJY`lm7FSiNUEf^#~)GOW<#@{ YװWy:bk@GҞS0Ɨ.Ymż^輔60v,,&,`[jO D*<(hglA\\_Ml >Jѣ(7[xI\x#+(vjc@O CZO4 F).T0#¤1<%whK@*. 
!j],jࢽ:h W O9 -DcKjҫr3U:쓩 IwsHsA &ę[*Álk&pn65P{Li߭~ί^5P(}{h[K-) ~K}L8C(d C{ _9z٧T_cM%:pRP[(jn?vܬ7@o2;(Pwf";0p Cl^ YKv^ T/W8PjpxR x6K:[l;R3l6F+Ăb.`KPG=2 #â˅m uY\?%@eTYٕW-M륖MFbdݣQU.1#[N!둓>g`9T}n9ZŦ${ǩXu6}{͘:!&ՑqT 5[eWzZܷ̥h27YnaLhɫ<}E*O]Yw۹xUs붹sqL՛fjL*'qU"P~ Mrt-"\ _ʏ_;͗R\'ߩjQlD' z5kuN#'%1!wx0+yjb$T{$F~ACʑL틜q@)ا1" Q (ɂ4I:Odn+UpC85"&P%> [؂A ,6]owK+>Od= 4g5zDLdl(M x#],mF^J[qz\Z >% U'<~ IǓsb2 )o R$di@/V~Ijт`̩g2H!Tigό-&## - SlPF랟 OaNLu!$V8 n֮!j ZKPRZZIK]wòڐ zpEA#b&Q -"V)+K&c2#ԍjǔEr$P@J!CyD~,b6p%x)C¼z Eeь#Dhkt[K<-p$m;TY ;fTs9Zfnכ*b5~ πҌyﴉw!L7W>Do"#r.ZG3V ` pފpQT޳?V21ɮeI5Q)Q*,[@ $#EjtgoGQD#Jrn^B(m>e4C\x_L#D$LX~ ^1h)2yL lZs%EZ#HǍoUCjՔR< 1MI%XU QXE0C C9n$ /cJxcy oxdbEXV@F^ MG- eTW1'`*ֿ}1K-]5ZhSP3,Bgܛ#Ѐ &t\٤gtZ +LSm i. W?u1M?eRy!VHb|4Fh |S3^Yh`OM̈A?#ZJ;Sz>>R`%_M{Q],]t6q~>^b,ݏd6/oK%QTI3tO dyH?ZJlj…4 %`a؉ }_oѤ4LeYK3YXAS ̇d`>$!U[͖| !5Ȱv*Yk8TrH3Cv~ZWkAe$NRv=I=Wa]P~ş6gNʇ~mn&3*nr )i}ZD䝦z;[[lff]w3T)A/T.Z;a~doCuW.Fx_Ǯ_bs?R0^ɽ#AYgR$bN8]Y I:#ʒ[-܂}jSo+ kJu_o-4a/:eq`u@GxƑ$Ac1&cb}+)M V|4<يno4俶kM{3ER k ʑ{E=W*!IA%M O]auO>uZ[V)}5?%\kڣ[)eTsZ8T[ Kpk6mngZy/YNcsV*&^!~xɾxyw2 QpL}޲|Z!Xsێ6EmZ<*x߈ܭ#VoBT#Ѩw!%PJu Bhق*A E;,F۲Ū+/uZ5dtAQZբD=P;SQFʢ3qw WɎg(#RA469g/4`ےcAF]LxzØZ0,4uM=(j ~yPH?*㓥6Y04f|=K)&(3$ &{;"˂[tOHߌ,NO[;a$(–ڢzdydz.sͫr 7衜5IO,g˕9U=\cP໚ .?掋\W^0mK'v};/ vN;TKxQ% /̙.))+]\miqkvk$Q7#ʆ}{DJn/]{;sF\pY]w -xM61 s Ze`1ĒVQ)`)[l|nUSP{#-/*E\9O}jtysۼr;#cݫt v"X;X=.D4Қnn+S̩n E 》},9h -:,آ]ͯ.[.xqW2RHYlcP3\`E\f k P UOmI>?YCtֆ⍕›$"ҡ*;NEXZ/OL3< R,:ޛ*ksȠU|'&j_ Z:ЇPPS!qm)a3~>c^ݗ0BLXCGA)CEp|wȱkɇrlt7 MXƳ.V1@0Wm&@B6RkЏfƥBTt]ԦI.dϽ摾,TQиݹ7bG.vfI7按;f w0OA'I!L/aOw͒o!F%&f[PӮ b(Np10T_kѣ nb,r}BљA4Lf)h7o }pjg<ǣ7mp Don/I,xDF/z I92Wd6BA_J-@qȦYS|Y)ky"f[xDzoj#^if*= 3n4DNH]Ԯ{DXϢjj@?{׮|XHs_g ~;H?{]b,|E=zCAfO,|{,$$W3[arF%L}ɻ㜣ڝ(:{͋k,gn Nn/)L01̸~#aGw_86^&[] ^D5I1j_Qx'v4~{>iG.4OL+F[e361D$#ʡ ;; n`F?i"b'(it*3u(b3iGx{z-is?ܬ)|3m oKC~n^"#3*0o`VhkLDL Ӓd`,X!!x яLK $G %J}bqn칔 DT0(_]r iDdY$M.D4 fOl 0݅67bz[T6IIC/e'J y ƴk&Crkfo,lshI. 
yL$E zC1qYPX*NO}%ë|U?::Xv=F-$ [D.4Lֆb)&"3M C6m}DڡfgQl.03#&IבA'k8ϱ-[9›>)yîa4h43:n@O6?;͢p!/``ޱ}' :}vy3/ ybNZ[9@@h >(3_uIN)S4Ev8NRKi|_)<= ^@\p4ջ͢Z@#%1O=^F^y,hwv\6l>Y4ܑIT6@j~𶤁W#OUX2]3҅^Eύ$'zO>D#KU@:b2A:S _eZԑ8&q$@Onu$ ̚VGHeSs ȨkJI _I,MW'ӝ,it'ӺeG[Kt`dKp 3RJ%.&')1]J.&jq^qؿ®WSKY<>S@ ^Zr~6{&]g]~?IK)M~-i!f%.Gŏ{J1|mCXt8xx1H9?fɄ~:PI ij`R-)օiCqXW(3=y4Wq_~* 2}3˳KS;NU[F&pypJ *y)k!KG gi6U!_33Zr;%7>69gNxHE >|r&Mֵ&c%p<`+Rm1H﷚Y MQdWOU]]}i}*S׮>o޿?g\eFkD[JӨ^ON2*N&?ͫx/e=VO=qM*Pa>λbj?zMlnHtaSY.R+kzn x\ONȝfJ h!AϏd$j@AHdK1OZ^*) *ATRH?4ֵJwZ=#5 \5 HG+k: 02]dpAU]ʶoE+;]nӹJM=|9}gZ4Rb޽OMSL9NR2ס ^Uo PXڻnN5=k.pΊw޵ht$?|S~9rwc-etDk+kR T~z7ݳv'lh[}mnd[n,nZ AQ_J=k=v t%7V}UJ{H'}`kgrJc(Z뺲պ/} (FUL+m۶l+,I/5nLqHl˾l3tUXYXZ"6k*ڤôUKhlSڬ4ѩʦ,s]E0 mP rƛ'[6__,EaA eh^HD@-׿_Q4wO<?aXXoq 覩umuUQ􂕦iPih-@l޿~7BD]D[cX`t.86#b=G AO<8:b&@F*AH DwQ]Zqg:S ƈ6MLbê(V{* 'ڨZLe[0eE4vkn?>G~՘nJn* }^L+2qtN@+}XǡmQXM trc{ Ě:B AQ%r`/&E{RU,HMJZ㢱.e(OQkCJNٷh_{Q5Bv0!lm7UQ7b-"ͽH0'u䈠q D;͌y=47zC"B SwBi)796\ܝ}[ɪ2U&mji7U復|٪jw-KIѳ׷JG+1~MKDmmp<,˩O 4~wǿx2;'fOlTe;;t^Zzk% p#g6٫ҟqo+XWG0g)Ó,uJۘ$.Ҿ=}X%*x>rRP{1{p4eBN Y,ggɞ{={T)1Kޫ[Qa<'B]_Y3&;0 L`&0 L`&0 L`&0 L`&0 L`&0 L`&0 L`&0 `y&0g O`&0 L`&0 L`&0 L`\y L'0 L`&0 L`8gE O`xe[ O``ށ &0 L`& L`&0 L`&0 L`&0 L`&0 L`{~0A&0 L`&0 L`&0 L`&0 L`&0 L` O``TL'0 L`&0 L`&0 Lpns c&0 L`&0 O'}}y< L``y1&&;0Ay&0 L`sQ &0 L0`< &0 LΊuV Ƙ&0A߁p?NLw`&c2$O'8 L`&0 L`&09+L'0L'0 L`&0 L`&0 L`\zyk嘠&0 L`{Ǿw&0 L`&0 L` O`&0 L`yy&0 L0d.kt&0 L`&0 L`&0 L`}t1A O`&0 L` O`&0 L``.\\:%}}&0 L`&0 L`&0 L`&0 L`&0 L`&0 L`&0 L`&0^5@ O`{y< L`&0 L`&0 L`&0 b< y|ZOdzv"<:9Le,6ٻ =p4ɨ<TO&uoiOFl8M?ݝQvw#YwwV_٨vv^zTYK e\*Wަ"5{U3Mҿ?Qz%wG\Òzz8`< VX’+\&0 L`&0 L`&0 L`&0 L`&0 L`&0 L`&0 L`&k L``38#O'0 L`&0 L`&0 L`&0uQL'0A O'&0 L`&0 L`sV O` LkÊma< L`y&&0  O'0 L`&0 L`&0 L`&0 L`&0 y&0 L`&0 L`&0 L`&0 L`&0 cb< L`=S1A&0 L`&0 L`&0mm'V L`&0 L`'8?A O` &0 L~ Ƙy L`&0 L`E'&0x&&0:+Y'cb}&Q{81A߁ L``~jSX#Nʙ ZkohZU:U!|BpMt)W[/o`ϳsUʶ2 ɎɴkìrvlSܿ>m1HRpzSćuW7M_O_v͒jxZy/ݯܞYm8fiJA0a:E< mL9Oo}И_[߾z/P͇`qE9++KY>;RV: *8IJ=eULL !Yd(ƣi!K:h?/$N㣣񨫁,BBI9Gd6 ~b6><9eNjӴOR冓? 
y<+dgq:ⰺ_t۟weX/RGw%c9y1~7ʦ'uՀO$fOqO[J9oQit_AGT}u<1ՋٱŠjv88t_C;uԭt_ۼ5g_|W?K9Gq&g\7spzslW<e:OEsГx S҂ټj*(Oo\wKVjDtퟎNv]7 g]F={c6_ح2jOF g}}*߽o^4:ܜ=ԕwy#G.Of㼞Nx/5?]|~ިՄrwiwQ9*_Nmi%q4=;gy]lJTa_OW"<-f\9Hݾ߽vXZF. 븆h{} qg@.7qv|Xq{K?}ёHYUp#ҡ9t8i۪m|.6 d9Dij]/}mѺ4277kH/FSJ;KoMѧ $ep~_{5-K|D:q,2 ~0[sd%Z'(EZۘ< ڰ{ 2:Ƣp6FU|ekV,k_蕬TLlitVu]F֔b"sGqzq0Ҫ))mx|{j.ieY}pӵsS|sWpyu2IxuoA!wmo]~ۿ> ǹhꢌR60FWǪ(JjWkYniG "w?2K뭙?v?yxMwP6:|+7i2Lgg am;4 ҹΎvZAZ⚲a9}X繁rx %b?'Qb7>O0Buzaz?4u^7˞mlh!:*a+wgMCi;[블HٽҢXgGEOF<~UHmF/ZS\o@&?}W  h/sWb ur2q1;u;uPbft7|~d6z8M~O76+wʧi^u-=s)z{foʛߦM⼤N^zZ Zp-ۼ,4T+n6Nos@BҮ@ 6 gzau\̽Z_]x6wV+[~jv%>ʐ =fO[6p.Lxb~C߳|b^3k-+OS%eMS^*7B rY‹FOޡrt\k6I#|7ljo\y MV,ib2.uJG:;v?clajv[n^קt4~2Ű:6wv[\jVkb}<a︸gy8N/th: tG // :C7>_̛`[_٥<]I&!_8UL)o}?QvǐVA:~"^]pom[1{fu‰b_]8?nFGVA:~"!gFom{fu‰^3ftB|лhV騲bJQ-xRo/so}nۢ2LT]K]VMcK[&6:MPuxx囅Bf=\*W< '<@^?gS9k}&JN62]tjQ4M UQU&zUG1=cFb>z׆O}>wmH3,v(Z\EۢނINe'" EY[Ev+p IpSn%w$҅E葹kAPz̜̼"h3YU6+$<$bVm|Pk-Jߛ\=D;r#Zw:"|J4aDUQ 3А[SI,1^Y+l:k^0k˫岔$2k)L/%)^.k=g)?h+GK&+܎Z­ܖy'*˕˪(%i }4F8)J.kւeUFu/Y^rp/YH urLTڪ(̻߶)ן9ڸv|g}rGhm?~W7wwwz6O[2m+-%rquXzC)!mh7w٘9ǍdPHZ|pB1?_Zz7ka+Owdg0]2.w₴St/ybHt/Jn(K ܶz0:1%\# Zw":XlznBN4E7)K'qK |,^zn`]<@VLBjyrglaS@$L5Z$[X{D '6cJf8UpB!)7|;@LB8!,Vz=[*=pB2+B~p>ǔ T>/7W2x b헛x)&݇r5Ջ-Qu RR_2 =t)!M kҞd+)a%:K`٪fY9@sTB}RXqdaդZ3@+4 FňTq;AiDor/hPFx(#nJʟcj,t]n0ټrκؘ #*q>}Xb:cj8C;.;l%K>T#nsDm1 k2}mR*LqwВb%1%A83 ɕ1/#cfeuSa̞DVS=ZPmTSB~p&a a9y+0aBvK:ncYp5ef>C),vU=s?X]\6N![V;rox(E@IҍMR<* &$PzEjez()5J5IT3sk ؋TWCX_W>%xpcFnfj9 ]/g45vѲv=fm~{᳿W3%ŷ'+S ;bOj5O5kpvQ1Q[s8x¢֯V?EFĎQ.)( JC'!$Z.7朗" _H c3|&R#$FI[?%$qĀ!>( ugxR@"|T^c^ Lp1F? 
var/home/core/zuul-output/logs/kubelet.log
Mar 20 06:49:26 crc systemd[1]: Starting Kubernetes Kubelet...
Mar 20 06:49:26 crc restorecon[4810]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c4,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c968,c969 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc 
restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 06:49:26 
crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 06:49:26 crc 
restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 06:49:26 crc 
restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c1016 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 20 06:49:26 crc 
restorecon[4810]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 
06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:26 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to
system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Mar 20 
06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Mar 20 06:49:27 crc restorecon[4810]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 
20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 
crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc 
restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:27 crc restorecon[4810]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Mar 20 06:49:27 crc restorecon[4810]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 20 06:49:28 crc kubenswrapper[5136]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.162268 5136 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170440 5136 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170487 5136 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170508 5136 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170522 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170533 5136 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170541 5136 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170553 5136 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170563 5136 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170573 5136 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170582 5136 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170590 5136 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170599 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170607 5136 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170615 5136 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170622 5136 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170630 5136 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170638 5136 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170656 5136 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170666 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170674 5136 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170683 5136 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170691 5136 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170701 5136 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170710 5136 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170718 5136 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170727 5136 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170734 5136 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170742 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170750 5136 feature_gate.go:330] unrecognized feature gate: Example
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170758 5136 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170765 5136 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170773 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170780 5136 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170788 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170796 5136 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170803 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170842 5136 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170851 5136 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170859 5136 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170869 5136 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170881 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170891 5136 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170900 5136 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170909 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170919 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170932 5136 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170942 5136 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170952 5136 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170961 5136 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170972 5136 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170982 5136 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.170996 5136 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171011 5136 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171022 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171032 5136 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171045 5136 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171054 5136 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171065 5136 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171074 5136 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171083 5136 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171091 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171099 5136 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171107 5136 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171115 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171124 5136 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171133 5136 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171142 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171150 5136 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171159 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171167 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.171176 5136 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172186 5136 flags.go:64] FLAG: --address="0.0.0.0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172214 5136 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172234 5136 flags.go:64] FLAG: --anonymous-auth="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172247 5136 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172261 5136 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172273 5136 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172295 5136 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172309 5136 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172321 5136 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172332 5136 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172347 5136 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172359 5136 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172370 5136 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172383 5136 flags.go:64] FLAG: --cgroup-root=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172392 5136 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172401 5136 flags.go:64] FLAG: --client-ca-file=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172410 5136 flags.go:64] FLAG: --cloud-config=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172418 5136 flags.go:64] FLAG: --cloud-provider=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172427 5136 flags.go:64] FLAG: --cluster-dns="[]"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172437 5136 flags.go:64] FLAG: --cluster-domain=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172446 5136 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172455 5136 flags.go:64] FLAG: --config-dir=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172463 5136 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172473 5136 flags.go:64] FLAG: --container-log-max-files="5"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172483 5136 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172492 5136 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172502 5136 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172511 5136 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172520 5136 flags.go:64] FLAG: --contention-profiling="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172528 5136 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172537 5136 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172547 5136 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172556 5136 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172567 5136 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172577 5136 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172586 5136 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172595 5136 flags.go:64] FLAG: --enable-load-reader="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172605 5136 flags.go:64] FLAG: --enable-server="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172614 5136 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172627 5136 flags.go:64] FLAG: --event-burst="100"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172636 5136 flags.go:64] FLAG: --event-qps="50"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172645 5136 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172654 5136 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172663 5136 flags.go:64] FLAG: --eviction-hard=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172673 5136 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172682 5136 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172692 5136 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172702 5136 flags.go:64] FLAG: --eviction-soft=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172711 5136 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172719 5136 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172728 5136 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172737 5136 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172748 5136 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172757 5136 flags.go:64] FLAG: --fail-swap-on="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172766 5136 flags.go:64] FLAG: --feature-gates=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172776 5136 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172785 5136 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172795 5136 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172804 5136 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172846 5136 flags.go:64] FLAG: --healthz-port="10248"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172856 5136 flags.go:64] FLAG: --help="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172865 5136 flags.go:64] FLAG: --hostname-override=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172873 5136 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172886 5136 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172899 5136 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172910 5136 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172920 5136 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172932 5136 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172943 5136 flags.go:64] FLAG: --image-service-endpoint=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172954 5136 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172965 5136 flags.go:64] FLAG: --kube-api-burst="100"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172976 5136 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.172989 5136 flags.go:64] FLAG: --kube-api-qps="50"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173000 5136 flags.go:64] FLAG: --kube-reserved=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173011 5136 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173022 5136 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173033 5136 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173043 5136 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173054 5136 flags.go:64] FLAG: --lock-file=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173063 5136 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173072 5136 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173081 5136 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173096 5136 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173105 5136 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173114 5136 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173123 5136 flags.go:64] FLAG: --logging-format="text"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173132 5136 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173141 5136 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173150 5136 flags.go:64] FLAG: --manifest-url=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173159 5136 flags.go:64] FLAG: --manifest-url-header=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173170 5136 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173179 5136 flags.go:64] FLAG: --max-open-files="1000000"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173190 5136 flags.go:64] FLAG: --max-pods="110"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173199 5136 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173208 5136 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173218 5136 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173226 5136 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173235 5136 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173244 5136 flags.go:64] FLAG: --node-ip="192.168.126.11"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173254 5136 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173273 5136 flags.go:64] FLAG: --node-status-max-images="50"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173282 5136 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173291 5136 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173300 5136 flags.go:64] FLAG: --pod-cidr=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173308 5136 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173323 5136 flags.go:64] FLAG: --pod-manifest-path=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173332 5136 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173341 5136 flags.go:64] FLAG: --pods-per-core="0"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173350 5136 flags.go:64] FLAG: --port="10250"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173359 5136 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173370 5136 flags.go:64] FLAG: --provider-id=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173379 5136 flags.go:64] FLAG: --qos-reserved=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173387 5136 flags.go:64] FLAG: --read-only-port="10255"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173396 5136 flags.go:64] FLAG: --register-node="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173405 5136 flags.go:64] FLAG: --register-schedulable="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173414 5136 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173428 5136 flags.go:64] FLAG: --registry-burst="10"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173438 5136 flags.go:64] FLAG: --registry-qps="5"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173448 5136 flags.go:64] FLAG: --reserved-cpus=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173456 5136 flags.go:64] FLAG: --reserved-memory=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173468 5136 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173477 5136 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173486 5136 flags.go:64] FLAG: --rotate-certificates="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173495 5136 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173503 5136 flags.go:64] FLAG: --runonce="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173512 5136 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173521 5136 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173531 5136 flags.go:64] FLAG: --seccomp-default="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173539 5136 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173548 5136 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173558 5136 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173567 5136 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173576 5136 flags.go:64] FLAG: --storage-driver-password="root"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173585 5136 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173594 5136 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173604 5136 flags.go:64] FLAG: --storage-driver-user="root"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173613 5136 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173623 5136 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173632 5136 flags.go:64] FLAG: --system-cgroups=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173640 5136 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173691 5136 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173702 5136 flags.go:64] FLAG: --tls-cert-file=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173711 5136 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173722 5136 flags.go:64] FLAG: --tls-min-version=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173730 5136 flags.go:64] FLAG: --tls-private-key-file=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173739 5136 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173748 5136 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173757 5136 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173765 5136 flags.go:64] FLAG: --v="2"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173777 5136 flags.go:64] FLAG: --version="false"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173790 5136 flags.go:64] FLAG: --vmodule=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173803 5136 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.173846 5136 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174078 5136 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174094 5136 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174105 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174116 5136 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174127 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174137 5136 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174148 5136 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174156 5136 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174166 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174176 5136 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174186 5136 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174194 5136 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174201 5136 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174209 5136 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 20 06:49:28 crc
kubenswrapper[5136]: W0320 06:49:28.174217 5136 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174225 5136 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174268 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174276 5136 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174284 5136 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174293 5136 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174301 5136 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174309 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174316 5136 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174324 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174332 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174339 5136 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174347 5136 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174357 5136 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174367 5136 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174375 5136 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174384 5136 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174391 5136 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174399 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174407 5136 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174415 5136 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174423 5136 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174431 5136 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174439 5136 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174451 5136 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174461 5136 feature_gate.go:330] unrecognized feature gate: Example Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174469 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174477 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174484 5136 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174492 5136 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174499 5136 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174507 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174515 5136 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174525 5136 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174534 5136 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174542 5136 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174551 5136 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174560 5136 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174570 5136 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174582 5136 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174592 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174601 5136 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174611 5136 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174622 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174632 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174642 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174650 5136 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174658 5136 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174666 5136 feature_gate.go:330] unrecognized feature 
gate: InsightsConfigAPI Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174674 5136 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174681 5136 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174691 5136 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174701 5136 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174709 5136 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174717 5136 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174725 5136 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.174733 5136 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.174747 5136 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.186055 5136 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.186105 5136 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" 
GOTRACEBACK="" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186289 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186311 5136 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186321 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186331 5136 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186341 5136 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186350 5136 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186360 5136 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186369 5136 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186383 5136 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186397 5136 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186409 5136 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186420 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186430 5136 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186440 5136 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186450 5136 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186459 5136 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186469 5136 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186478 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186487 5136 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186497 5136 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186506 5136 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186516 5136 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186525 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186535 5136 
feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186544 5136 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186554 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186563 5136 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186573 5136 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186582 5136 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186593 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186603 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186613 5136 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186625 5136 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186636 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186649 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186660 5136 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186670 5136 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186680 5136 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186690 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186699 5136 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186709 5136 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186719 5136 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186731 5136 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186744 5136 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186945 5136 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186957 5136 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186968 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186978 5136 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186988 5136 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.186996 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187007 5136 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187017 5136 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187028 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187037 5136 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187089 5136 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187105 5136 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187116 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187125 5136 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187135 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187144 5136 feature_gate.go:330] unrecognized feature gate: Example Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187153 5136 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187164 5136 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187174 5136 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187185 5136 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187195 5136 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187206 5136 feature_gate.go:330] 
unrecognized feature gate: AutomatedEtcdBackup Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187217 5136 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187227 5136 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187237 5136 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187247 5136 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187259 5136 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.187276 5136 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187546 5136 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187565 5136 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187576 5136 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187586 5136 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187595 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187604 
5136 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187613 5136 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187623 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187633 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187645 5136 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187655 5136 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187668 5136 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187682 5136 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187692 5136 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187703 5136 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187713 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187723 5136 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187733 5136 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187743 5136 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187755 5136 feature_gate.go:330] unrecognized 
feature gate: InsightsConfigAPI Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187764 5136 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187774 5136 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187784 5136 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187794 5136 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187805 5136 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187847 5136 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187857 5136 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187867 5136 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187876 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187886 5136 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187895 5136 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187908 5136 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187921 5136 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187931 5136 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187941 5136 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187951 5136 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187959 5136 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187967 5136 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187975 5136 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187984 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.187994 5136 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188003 5136 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188012 5136 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188020 5136 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188030 5136 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188043 5136 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 20 06:49:28 crc kubenswrapper[5136]: 
W0320 06:49:28.188053 5136 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188062 5136 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188071 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188080 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188090 5136 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188100 5136 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188110 5136 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188120 5136 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188129 5136 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188142 5136 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188153 5136 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188163 5136 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188175 5136 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188187 5136 feature_gate.go:330] unrecognized feature gate: Example Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188199 5136 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188210 5136 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188220 5136 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188231 5136 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188241 5136 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188251 5136 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188264 5136 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188273 5136 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188283 5136 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188296 5136 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.188310 5136 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.188326 5136 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.188716 5136 server.go:940] "Client rotation is on, will bootstrap in background" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.194071 5136 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.198952 5136 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.199099 5136 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.200884 5136 server.go:997] "Starting client certificate rotation" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.200923 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.201098 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.228809 5136 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.230656 5136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.231950 5136 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.251887 5136 log.go:25] "Validated CRI v1 runtime API" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.289686 5136 log.go:25] "Validated CRI v1 image API" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.291724 5136 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.297496 5136 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-03-20-06-40-52-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.297528 5136 fs.go:134] 
Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.320848 5136 manager.go:217] Machine: {Timestamp:2026-03-20 06:49:28.316579548 +0000 UTC m=+0.575890719 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2 BootID:35df81f9-549e-4466-8b52-0d5376d2ac8e Filesystems:[{Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex 
MacAddress:fa:16:3e:9e:e9:df Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:9e:e9:df Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:25:03:3b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:d2:45:a7 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:1b:ff:08 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6d:05:a3 Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:6b:a6:cf Speed:-1 Mtu:1496} {Name:ens7.44 MacAddress:52:54:00:89:18:28 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:3e:48:45:c2:d8:3a Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2a:cf:a3:17:2b:68 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 
BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.321122 5136 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.321289 5136 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.322523 5136 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.322742 5136 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.322787 5136 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323051 5136 topology_manager.go:138] "Creating topology manager with none policy"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323065 5136 container_manager_linux.go:303] "Creating device plugin manager"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323504 5136 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323535 5136 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323733 5136 state_mem.go:36] "Initialized new in-memory state store"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.323907 5136 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.327365 5136 kubelet.go:418] "Attempting to sync node with API server"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.327388 5136 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.327409 5136 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.327422 5136 kubelet.go:324] "Adding apiserver pod source"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.327434 5136 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 
06:49:28.331683 5136 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.333017 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.333093 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.333160 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.333256 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.334310 5136 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.336883 5136 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338106 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338134 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338144 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338154 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338170 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338179 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338189 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338206 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338218 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338227 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338240 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.338249 5136 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.339041 5136 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.339523 5136 server.go:1280] "Started kubelet" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.340551 5136 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.340571 5136 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.340977 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.341123 5136 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 20 06:49:28 crc systemd[1]: Started Kubernetes Kubelet. Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.342652 5136 server.go:460] "Adding debug handlers to kubelet server" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.343193 5136 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.343298 5136 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.343733 5136 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.343745 5136 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.343854 5136 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.343872 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:49:28 crc 
kubenswrapper[5136]: E0320 06:49:28.344356 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.344426 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.344480 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.344855 5136 factory.go:55] Registering systemd factory Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.344870 5136 factory.go:221] Registration of the systemd container factory successfully Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.345183 5136 factory.go:153] Registering CRI-O factory Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.345194 5136 factory.go:221] Registration of the crio container factory successfully Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.345267 5136 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.345289 5136 factory.go:103] Registering Raw factory Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 
06:49:28.345305 5136 manager.go:1196] Started watching for new ooms in manager Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.346782 5136 manager.go:319] Starting recovery of all containers Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.347380 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362649 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362717 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362736 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" 
seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362752 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362767 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362783 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362798 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362832 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362850 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362870 
5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362885 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362899 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362914 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362934 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362949 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362967 5136 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.362985 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363000 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363014 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363060 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363076 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363090 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363104 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363117 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363130 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363146 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363186 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363202 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363215 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363226 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363288 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363318 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363351 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.363365 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365046 5136 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365102 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365123 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365139 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365156 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365169 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365183 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365197 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365209 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365225 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365241 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365256 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365270 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365283 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365298 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365313 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365325 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365339 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365352 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365376 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365392 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365407 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365423 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365438 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365451 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365464 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365475 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365488 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365501 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365515 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365529 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365542 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365554 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365568 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365579 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365592 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365605 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365616 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365632 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365644 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365656 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365669 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365682 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365694 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365706 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365719 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365734 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365746 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365758 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365772 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365785 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365798 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365831 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365845 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365857 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365870 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365882 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365896 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365908 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365922 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365940 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365952 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365965 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365978 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.365996 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366009 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366022 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366039 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366068 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366086 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366105 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366126 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366145 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366165 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366185 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366202 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366216 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366230 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366246 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366261 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366310 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366324 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366336 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366349 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366361 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366372 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366386 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366398 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366413 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366439 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366451 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366464 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366477 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366489 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366503 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366516 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366529 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366541 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366552 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366566 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366578 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366590 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366604 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366589 5136 manager.go:324] Recovery completed
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.366622 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367229 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367273 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367298 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367320 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367339 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367358 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367376 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367395 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367414 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367434 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367453 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367472 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367491 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5"
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367509 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367526 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367546 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367566 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367586 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367609 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" 
seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367694 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367713 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367732 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367751 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367771 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367789 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: 
I0320 06:49:28.367808 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367859 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367879 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367896 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367913 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367928 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367941 5136 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367954 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367969 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367983 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.367997 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368009 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368024 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368042 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368061 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368078 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368098 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368118 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368137 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368153 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368170 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368188 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368205 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368222 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368239 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" 
volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368259 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368279 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368298 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368314 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368330 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368347 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368365 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368384 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368403 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368420 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368439 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368454 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368467 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368483 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368502 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368522 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368540 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368557 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" 
seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368574 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368591 5136 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368608 5136 reconstruct.go:97] "Volume reconstruction finished" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.368619 5136 reconciler.go:26] "Reconciler: start to sync state" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.376196 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.377663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.377698 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.377711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.378662 5136 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.378684 5136 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.378703 5136 state_mem.go:36] "Initialized new in-memory state store" Mar 20 06:49:28 crc 
kubenswrapper[5136]: I0320 06:49:28.391190 5136 policy_none.go:49] "None policy: Start" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.392210 5136 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.392239 5136 state_mem.go:35] "Initializing new in-memory state store" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.393413 5136 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.395279 5136 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.395351 5136 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.395400 5136 kubelet.go:2335] "Starting kubelet main sync loop" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.395502 5136 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.397479 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.397528 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.444270 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node 
\"crc\" not found" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.464399 5136 manager.go:334] "Starting Device Plugin manager" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.464452 5136 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.464467 5136 server.go:79] "Starting device plugin registration server" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.464943 5136 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.464961 5136 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.465229 5136 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.465317 5136 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.465329 5136 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.476708 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.495977 5136 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.496064 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.499580 5136 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.499627 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.499638 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.499839 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.501002 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.501047 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502087 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502118 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502130 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502295 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502769 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502803 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502853 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502874 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.502887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503348 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503358 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503479 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503701 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.503778 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506290 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506371 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506294 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506394 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506404 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506381 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506373 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506553 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506637 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506802 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.506843 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507303 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507316 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507403 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507425 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507434 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507497 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.507520 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.508375 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.508404 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.508415 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.544906 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.566010 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.566942 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.566970 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.566979 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.567002 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.567317 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.570984 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571018 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571044 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571067 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571087 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571109 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571128 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571147 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571168 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571214 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571238 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571274 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571302 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.571337 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672050 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672106 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672127 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672149 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672169 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672194 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672209 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672222 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672237 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672252 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672267 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672275 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672325 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672346 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672361 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672397 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672449 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672451 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672467 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672477 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672488 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672511 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672529 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672543 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672604 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672626 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672643 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.672694 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.767486 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.769273 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.769322 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.769357 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.769384 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.769993 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.836344 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.842219 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.870625 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.876106 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-0a193834d6eba9c791a237b787f15bba0b736e0a6ee749057b72eea9884ee605 WatchSource:0}: Error finding container 0a193834d6eba9c791a237b787f15bba0b736e0a6ee749057b72eea9884ee605: Status 404 returned error can't find the container with id 0a193834d6eba9c791a237b787f15bba0b736e0a6ee749057b72eea9884ee605
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.880617 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-40a72a96dbb90b4ee768105313c2e8ab551127e383ab23c934ef1b2b3edcf3c6 WatchSource:0}: Error finding container 40a72a96dbb90b4ee768105313c2e8ab551127e383ab23c934ef1b2b3edcf3c6: Status 404 returned error can't find the container with id 40a72a96dbb90b4ee768105313c2e8ab551127e383ab23c934ef1b2b3edcf3c6
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.892799 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-517f26af0658ddb977bdc7e5b325c63e73d20f866b2d48b8b544045f0bf011a8 WatchSource:0}: Error finding container 517f26af0658ddb977bdc7e5b325c63e73d20f866b2d48b8b544045f0bf011a8: Status 404 returned error can't find the container with id 517f26af0658ddb977bdc7e5b325c63e73d20f866b2d48b8b544045f0bf011a8
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.896052 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: I0320 06:49:28.904053 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.908400 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-969f650037db2a4ec2162d83b339cfdaae270274c907256ee9c5d530ad3bd25f WatchSource:0}: Error finding container 969f650037db2a4ec2162d83b339cfdaae270274c907256ee9c5d530ad3bd25f: Status 404 returned error can't find the container with id 969f650037db2a4ec2162d83b339cfdaae270274c907256ee9c5d530ad3bd25f
Mar 20 06:49:28 crc kubenswrapper[5136]: W0320 06:49:28.917519 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-4a15e640fc6ee8591ba51b26d10b574565ecf686a861a0c9c439f8b0508ad591 WatchSource:0}: Error finding container 4a15e640fc6ee8591ba51b26d10b574565ecf686a861a0c9c439f8b0508ad591: Status 404 returned error can't find the container with id 4a15e640fc6ee8591ba51b26d10b574565ecf686a861a0c9c439f8b0508ad591
Mar 20 06:49:28 crc kubenswrapper[5136]: E0320 06:49:28.945861 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms"
Mar 20 06:49:29 crc kubenswrapper[5136]: W0320 06:49:29.147315 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.147405 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.170621 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.171716 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.171760 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.171772 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.171799 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.172310 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc"
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.342287 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.400262 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"4a15e640fc6ee8591ba51b26d10b574565ecf686a861a0c9c439f8b0508ad591"}
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.401379 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"969f650037db2a4ec2162d83b339cfdaae270274c907256ee9c5d530ad3bd25f"}
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.402551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"517f26af0658ddb977bdc7e5b325c63e73d20f866b2d48b8b544045f0bf011a8"}
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.403702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"40a72a96dbb90b4ee768105313c2e8ab551127e383ab23c934ef1b2b3edcf3c6"}
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.404804 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0a193834d6eba9c791a237b787f15bba0b736e0a6ee749057b72eea9884ee605"}
Mar 20 06:49:29 crc kubenswrapper[5136]: W0320 06:49:29.427445 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.427516 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.746972 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s"
Mar 20 06:49:29 crc kubenswrapper[5136]: W0320 06:49:29.819525 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.819609 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.876020 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:49:29 crc kubenswrapper[5136]: W0320 06:49:29.890195 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.890295 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.972980 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.974594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.974695 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.974706 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:29 crc kubenswrapper[5136]: I0320 06:49:29.974728 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:49:29 crc kubenswrapper[5136]: E0320 06:49:29.975193 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc"
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.281758 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 20 06:49:30 crc kubenswrapper[5136]: E0320 06:49:30.283161 5136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError"
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.342483 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.411996 5136 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab" exitCode=0
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.412134 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab"}
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.412173 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.413502 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.413599 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.413634 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.414534 5136 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0" exitCode=0
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.414651 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0"}
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.414896 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417416 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417454 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417466 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417715 5136 generic.go:334] "Generic (PLEG): container finished"
podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636" exitCode=0 Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417845 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636"} Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.417971 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423574 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0" exitCode=0 Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423624 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423687 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423753 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0"} Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.423923 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.427575 5136 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.427602 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.427614 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.429168 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.430253 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.430309 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.430333 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.431296 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b"} Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.431330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835"} Mar 20 06:49:30 crc kubenswrapper[5136]: I0320 06:49:30.431346 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b"} Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.342529 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:31 crc kubenswrapper[5136]: E0320 06:49:31.347846 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.444095 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a"} Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.444208 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.448161 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.448197 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.448212 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.451066 5136 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72" exitCode=0 Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.451259 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.451235 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72"} Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.453023 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.453068 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.453086 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.455336 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.455372 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc"} Mar 20 06:49:31 crc kubenswrapper[5136]: W0320 06:49:31.461503 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:31 crc kubenswrapper[5136]: E0320 06:49:31.461600 5136 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.463116 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.463533 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.463566 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.472239 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.472322 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084"} Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.472977 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6"} Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.473068 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9"} Mar 20 06:49:31 crc 
kubenswrapper[5136]: I0320 06:49:31.475209 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.475252 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.475264 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.478208 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa"} Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.478354 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f"} Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.478441 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc"} Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.478551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080"} Mar 20 06:49:31 crc kubenswrapper[5136]: W0320 06:49:31.503357 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:31 crc kubenswrapper[5136]: E0320 06:49:31.503904 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.571338 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.575897 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.578100 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.578160 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.578181 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:31 crc kubenswrapper[5136]: I0320 06:49:31.578259 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:49:31 crc kubenswrapper[5136]: E0320 06:49:31.579018 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.163:6443: connect: connection refused" node="crc" Mar 20 06:49:31 crc kubenswrapper[5136]: W0320 06:49:31.960286 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:49:31 crc kubenswrapper[5136]: E0320 06:49:31.960428 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.284034 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.291563 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.483132 5136 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19" exitCode=0 Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.483244 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19"} Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.483377 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.484692 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.484728 5136 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.484740 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489271 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8d93960df6dce2dfdd966016501b971d075f10507bb052d8c270bbcd775f7b55"} Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489377 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489400 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489426 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489389 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.489431 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.490975 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491006 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491017 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491131 
5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491195 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491232 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491257 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491255 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.491308 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.492002 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.492037 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.492048 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:32 crc kubenswrapper[5136]: I0320 06:49:32.698212 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.312833 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497740 5136 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce"} Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497849 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497858 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131"} Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497894 5136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497983 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497866 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.497895 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2"} Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.498285 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc"} Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499076 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499105 5136 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499115 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499631 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499682 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499695 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499709 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499745 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.499768 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:33 crc kubenswrapper[5136]: I0320 06:49:33.914295 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.503203 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.503337 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.503388 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.503183 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15"} Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.504611 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.504640 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.504651 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.504988 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.505042 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.505066 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.505251 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.505374 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.505402 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.568671 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 
06:49:34.780186 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.782039 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.782109 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.782131 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:34 crc kubenswrapper[5136]: I0320 06:49:34.782165 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.506422 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.506422 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508109 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508110 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508171 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508184 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508151 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.508213 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.630673 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.698396 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.698513 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.727641 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.727970 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.729454 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.729502 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.729520 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:35 crc kubenswrapper[5136]: I0320 06:49:35.902461 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.486100 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.510016 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.510078 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511517 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511575 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511593 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511845 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511888 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:36 crc kubenswrapper[5136]: I0320 06:49:36.511908 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:37 crc kubenswrapper[5136]: I0320 06:49:37.512762 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:37 crc kubenswrapper[5136]: I0320 06:49:37.514415 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:37 crc kubenswrapper[5136]: I0320 06:49:37.514473 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:37 crc kubenswrapper[5136]: I0320 06:49:37.514491 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:38 crc kubenswrapper[5136]: E0320 06:49:38.477222 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 20 06:49:42 crc kubenswrapper[5136]: I0320 06:49:42.343314 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.401940 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 20 06:49:42 crc kubenswrapper[5136]: W0320 06:49:42.403666 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.403924 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:42 crc kubenswrapper[5136]: W0320 06:49:42.408276 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.408335 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:42 crc kubenswrapper[5136]: W0320 06:49:42.409506 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.409566 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:42 crc kubenswrapper[5136]: W0320 06:49:42.411218 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.411276 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.412502 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" interval="6.4s"
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.412761 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:49:42 crc kubenswrapper[5136]: E0320 06:49:42.415350 5136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:42 crc kubenswrapper[5136]: I0320 06:49:42.421294 5136 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 20 06:49:42 crc kubenswrapper[5136]: I0320 06:49:42.421390 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Mar 20 06:49:42 crc kubenswrapper[5136]: I0320 06:49:42.429645 5136 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403}
Mar 20 06:49:42 crc kubenswrapper[5136]: I0320 06:49:42.429703 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.322495 5136 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]log ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]etcd ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-apiserver-admission-initializer ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-api-request-count-filter ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-startkubeinformers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/generic-apiserver-start-informers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-config-consumer ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-filter ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/storage-object-count-tracker-hook ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-apiextensions-informers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-apiextensions-controllers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/crd-informer-synced ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-system-namespaces-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-cluster-authentication-info-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-legacy-token-tracking-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-service-ip-repair-controllers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Mar 20 06:49:43 crc kubenswrapper[5136]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-config-producer ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/bootstrap-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/start-kube-aggregator-informers ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-status-local-available-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-status-remote-available-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-registration-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-wait-for-first-sync ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-discovery-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/kube-apiserver-autoregistration ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]autoregister-completion ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-openapi-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: [+]poststarthook/apiservice-openapiv3-controller ok
Mar 20 06:49:43 crc kubenswrapper[5136]: livez check failed
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.322569 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.344918 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:43Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.530716 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.532884 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8d93960df6dce2dfdd966016501b971d075f10507bb052d8c270bbcd775f7b55" exitCode=255
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.532934 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"8d93960df6dce2dfdd966016501b971d075f10507bb052d8c270bbcd775f7b55"}
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.533138 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.534316 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.534357 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.534395 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:43 crc kubenswrapper[5136]: I0320 06:49:43.535246 5136 scope.go:117] "RemoveContainer" containerID="8d93960df6dce2dfdd966016501b971d075f10507bb052d8c270bbcd775f7b55"
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.345954 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:44Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.536871 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.537269 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.539031 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" exitCode=255
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.539079 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d"}
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.539141 5136 scope.go:117] "RemoveContainer" containerID="8d93960df6dce2dfdd966016501b971d075f10507bb052d8c270bbcd775f7b55"
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.539271 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.540072 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.540113 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.540129 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:44 crc kubenswrapper[5136]: I0320 06:49:44.540780 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d"
Mar 20 06:49:44 crc kubenswrapper[5136]: E0320 06:49:44.541007 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 06:49:45 crc kubenswrapper[5136]: I0320 06:49:45.346175 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:45Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:45 crc kubenswrapper[5136]: I0320 06:49:45.545803 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Mar 20 06:49:45 crc kubenswrapper[5136]: I0320 06:49:45.699093 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 06:49:45 crc kubenswrapper[5136]: I0320 06:49:45.699156 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.350624 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:46Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.525485 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.525638 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.526860 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.526895 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.526905 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.539778 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.551140 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.552253 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.552300 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:46 crc kubenswrapper[5136]: I0320 06:49:46.552312 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:47 crc kubenswrapper[5136]: I0320 06:49:47.344369 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:47Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.320797 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.321079 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.322907 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.322952 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.322965 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.323566 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d"
Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.323727 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.324902 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.346394 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:48Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.477367 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 20 06:49:48 crc kubenswrapper[5136]: W0320 06:49:48.517329 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:48Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.517437 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:48Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.556503 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.557851 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.557890 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.557900 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.558457 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d"
Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.558607 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.802398 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.804016 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.804096 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.804107 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.804131 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.808639 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:48Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 20 06:49:48 crc kubenswrapper[5136]: E0320 06:49:48.816106 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:48Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.939996 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.940177 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.941514 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.941558 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:48 crc kubenswrapper[5136]: I0320 06:49:48.941567 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:49 crc kubenswrapper[5136]: W0320 06:49:49.065585 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:49Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:49 crc kubenswrapper[5136]: E0320 06:49:49.065691 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:49 crc kubenswrapper[5136]: I0320 06:49:49.343989 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:49Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:49 crc kubenswrapper[5136]: W0320 06:49:49.661980 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:49Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:49 crc kubenswrapper[5136]: E0320 06:49:49.662073 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.092235 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.092400 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.093893 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.093963 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.093982 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.094756 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d"
Mar 20 06:49:50 crc kubenswrapper[5136]: E0320 06:49:50.095160 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.346612 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:50Z is after 2026-02-23T05:33:13Z
Mar 20 06:49:50 crc kubenswrapper[5136]: I0320 06:49:50.847889 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 20 06:49:50 crc kubenswrapper[5136]: E0320 06:49:50.853467 5136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:50Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:49:51 crc kubenswrapper[5136]: I0320 06:49:51.349190 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:49:51Z is after 2026-02-23T05:33:13Z Mar 20 06:49:52 crc kubenswrapper[5136]: I0320 06:49:52.345283 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:52Z is after 2026-02-23T05:33:13Z Mar 20 06:49:52 crc kubenswrapper[5136]: E0320 06:49:52.417844 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:52Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.346687 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:53Z is after 2026-02-23T05:33:13Z Mar 20 06:49:53 crc kubenswrapper[5136]: W0320 06:49:53.585870 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:53Z is after 2026-02-23T05:33:13Z Mar 20 06:49:53 crc kubenswrapper[5136]: E0320 06:49:53.586001 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:53Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.914792 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.915082 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.916866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.916915 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.916933 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:53 crc kubenswrapper[5136]: I0320 06:49:53.917777 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" Mar 20 06:49:53 crc kubenswrapper[5136]: E0320 06:49:53.918186 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:49:54 crc kubenswrapper[5136]: I0320 06:49:54.346958 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:54Z is after 2026-02-23T05:33:13Z Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.344543 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:55Z is after 2026-02-23T05:33:13Z Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.699263 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.699453 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.699557 5136 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.699767 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.701794 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.701871 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.701889 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.702659 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.702952 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835" gracePeriod=30 Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.809301 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.811757 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 
06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.812007 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.812170 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:55 crc kubenswrapper[5136]: I0320 06:49:55.812378 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:49:55 crc kubenswrapper[5136]: E0320 06:49:55.816072 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:55Z is after 2026-02-23T05:33:13Z" node="crc" Mar 20 06:49:55 crc kubenswrapper[5136]: E0320 06:49:55.822868 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:55Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.345716 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:56Z is after 2026-02-23T05:33:13Z Mar 20 06:49:56 crc kubenswrapper[5136]: W0320 06:49:56.498600 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-03-20T06:49:56Z is after 2026-02-23T05:33:13Z Mar 20 06:49:56 crc kubenswrapper[5136]: E0320 06:49:56.498726 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:56Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.580439 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.581151 5136 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835" exitCode=255 Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.581203 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835"} Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.581269 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde"} Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.581397 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.582422 5136 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.582478 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:56 crc kubenswrapper[5136]: I0320 06:49:56.582497 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:57 crc kubenswrapper[5136]: I0320 06:49:57.345163 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:57Z is after 2026-02-23T05:33:13Z Mar 20 06:49:57 crc kubenswrapper[5136]: I0320 06:49:57.583502 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:49:57 crc kubenswrapper[5136]: I0320 06:49:57.584514 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:49:57 crc kubenswrapper[5136]: I0320 06:49:57.584545 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:49:57 crc kubenswrapper[5136]: I0320 06:49:57.584554 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:49:58 crc kubenswrapper[5136]: I0320 06:49:58.344528 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:58Z is after 2026-02-23T05:33:13Z Mar 20 06:49:58 crc kubenswrapper[5136]: E0320 06:49:58.477461 5136 eviction_manager.go:285] "Eviction 
manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:49:59 crc kubenswrapper[5136]: I0320 06:49:59.348073 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:49:59Z is after 2026-02-23T05:33:13Z Mar 20 06:50:00 crc kubenswrapper[5136]: I0320 06:50:00.346398 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:00Z is after 2026-02-23T05:33:13Z Mar 20 06:50:01 crc kubenswrapper[5136]: I0320 06:50:01.346645 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:01Z is after 2026-02-23T05:33:13Z Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.346992 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:02Z is after 2026-02-23T05:33:13Z Mar 20 06:50:02 crc kubenswrapper[5136]: E0320 06:50:02.425560 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:50:02Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.699104 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.699296 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.700884 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.700936 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.700958 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.816197 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.817955 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.818062 5136 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.818088 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:02 crc kubenswrapper[5136]: I0320 06:50:02.818156 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:50:02 crc kubenswrapper[5136]: E0320 06:50:02.823407 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:02Z is after 2026-02-23T05:33:13Z" node="crc" Mar 20 06:50:02 crc kubenswrapper[5136]: E0320 06:50:02.828141 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:02Z is after 2026-02-23T05:33:13Z" interval="7s" Mar 20 06:50:03 crc kubenswrapper[5136]: I0320 06:50:03.346592 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:03Z is after 2026-02-23T05:33:13Z Mar 20 06:50:04 crc kubenswrapper[5136]: I0320 06:50:04.346133 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:04Z is after 2026-02-23T05:33:13Z Mar 20 06:50:04 crc kubenswrapper[5136]: W0320 06:50:04.951777 5136 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:04Z is after 2026-02-23T05:33:13Z Mar 20 06:50:04 crc kubenswrapper[5136]: E0320 06:50:04.951937 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:04Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.347072 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:05Z is after 2026-02-23T05:33:13Z Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.699204 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.699277 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get 
\"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.728531 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.728734 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.730231 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.730298 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:05 crc kubenswrapper[5136]: I0320 06:50:05.730321 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.346292 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:06Z is after 2026-02-23T05:33:13Z Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.396401 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.397846 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.397887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.397905 
5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:06 crc kubenswrapper[5136]: I0320 06:50:06.398494 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.346147 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:07Z is after 2026-02-23T05:33:13Z Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.616051 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.616567 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.618522 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c" exitCode=255 Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.618586 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c"} Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.618655 5136 scope.go:117] "RemoveContainer" containerID="51c14348a58909a5e7e089be0f655bf8857191f27a386f41ff90e29c82fc585d" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.618981 5136 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.620508 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.620531 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.620543 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.621055 5136 scope.go:117] "RemoveContainer" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c" Mar 20 06:50:07 crc kubenswrapper[5136]: E0320 06:50:07.621223 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:07 crc kubenswrapper[5136]: I0320 06:50:07.757070 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 06:50:07 crc kubenswrapper[5136]: E0320 06:50:07.761568 5136 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:07Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Mar 20 06:50:07 crc 
kubenswrapper[5136]: E0320 06:50:07.762779 5136 certificate_manager.go:440] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Reached backoff limit, still unable to rotate certs: timed out waiting for the condition" logger="UnhandledError" Mar 20 06:50:08 crc kubenswrapper[5136]: I0320 06:50:08.345898 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:08Z is after 2026-02-23T05:33:13Z Mar 20 06:50:08 crc kubenswrapper[5136]: E0320 06:50:08.477976 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:50:08 crc kubenswrapper[5136]: I0320 06:50:08.625121 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.344378 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:09Z is after 2026-02-23T05:33:13Z Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.823786 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.825331 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.825395 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" 
Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.825415 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:09 crc kubenswrapper[5136]: I0320 06:50:09.825449 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:50:09 crc kubenswrapper[5136]: E0320 06:50:09.830653 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:09Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 20 06:50:09 crc kubenswrapper[5136]: E0320 06:50:09.834386 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:09Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.092927 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.093079 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.094196 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.094224 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.094233 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.094746 5136 scope.go:117] "RemoveContainer" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c"
Mar 20 06:50:10 crc kubenswrapper[5136]: E0320 06:50:10.094942 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 06:50:10 crc kubenswrapper[5136]: I0320 06:50:10.344519 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:10Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:10 crc kubenswrapper[5136]: W0320 06:50:10.596165 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:10Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:10 crc kubenswrapper[5136]: E0320 06:50:10.596273 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:10Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:50:11 crc kubenswrapper[5136]: I0320 06:50:11.347061 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:11Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:12 crc kubenswrapper[5136]: I0320 06:50:12.345779 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:12Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:12 crc kubenswrapper[5136]: E0320 06:50:12.431503 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:12Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.348313 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:13Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:13 crc kubenswrapper[5136]: W0320 06:50:13.810226 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:13Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:13 crc kubenswrapper[5136]: E0320 06:50:13.810304 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:13Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.915162 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.915646 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.917431 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.917475 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.917487 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:13 crc kubenswrapper[5136]: I0320 06:50:13.919292 5136 scope.go:117] "RemoveContainer" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c"
Mar 20 06:50:13 crc kubenswrapper[5136]: E0320 06:50:13.919919 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 06:50:14 crc kubenswrapper[5136]: I0320 06:50:14.345787 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:14Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:15 crc kubenswrapper[5136]: I0320 06:50:15.348257 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:15Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:15 crc kubenswrapper[5136]: I0320 06:50:15.699530 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 06:50:15 crc kubenswrapper[5136]: I0320 06:50:15.699666 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.349148 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:16Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.831733 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.833091 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.833134 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.833145 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:16 crc kubenswrapper[5136]: I0320 06:50:16.833170 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:50:16 crc kubenswrapper[5136]: E0320 06:50:16.836166 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:16Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 20 06:50:16 crc kubenswrapper[5136]: E0320 06:50:16.838307 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:16Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 20 06:50:17 crc kubenswrapper[5136]: W0320 06:50:17.159957 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:17Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:17 crc kubenswrapper[5136]: E0320 06:50:17.160356 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:17Z is after 2026-02-23T05:33:13Z" logger="UnhandledError"
Mar 20 06:50:17 crc kubenswrapper[5136]: I0320 06:50:17.347380 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:17Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:18 crc kubenswrapper[5136]: I0320 06:50:18.346590 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:18Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:18 crc kubenswrapper[5136]: E0320 06:50:18.478081 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 20 06:50:19 crc kubenswrapper[5136]: I0320 06:50:19.344604 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:19Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:20 crc kubenswrapper[5136]: I0320 06:50:20.344124 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:20Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.343964 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:21Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.574766 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.574946 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.575913 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.575951 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:21 crc kubenswrapper[5136]: I0320 06:50:21.575964 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:22 crc kubenswrapper[5136]: I0320 06:50:22.345001 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:22Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:22 crc kubenswrapper[5136]: E0320 06:50:22.435489 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:22Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.346410 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:23Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.836752 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.838324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.838406 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.838423 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:23 crc kubenswrapper[5136]: I0320 06:50:23.838461 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:50:23 crc kubenswrapper[5136]: E0320 06:50:23.841941 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:23Z is after 2026-02-23T05:33:13Z" interval="7s"
Mar 20 06:50:23 crc kubenswrapper[5136]: E0320 06:50:23.843848 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:23Z is after 2026-02-23T05:33:13Z" node="crc"
Mar 20 06:50:24 crc kubenswrapper[5136]: I0320 06:50:24.349915 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:24Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.347657 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 2026-02-23T05:33:13Z
Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.698727 5136 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.698829 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.698897 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.699074 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.700360 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.700430 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.700449 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.701208 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 20 06:50:25 crc kubenswrapper[5136]: I0320 06:50:25.701373 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde" gracePeriod=30
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.349445 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.396494 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.397952 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.398022 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.398048 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.398904 5136 scope.go:117] "RemoveContainer" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c"
Mar 20 06:50:26 crc kubenswrapper[5136]: E0320 06:50:26.399251 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.677176 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.678397 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log"
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.678856 5136 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde" exitCode=255
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.678908 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde"}
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.678942 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9"}
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.678964 5136 scope.go:117] "RemoveContainer" containerID="e3b46faa2d214ca93f46cc423d2a8cc40390e007424f3685c10e23d023d07835"
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.679140 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.680563 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.680599 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:26 crc kubenswrapper[5136]: I0320 06:50:26.680609 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:27 crc kubenswrapper[5136]: I0320 06:50:27.345742 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 06:50:27 crc kubenswrapper[5136]: I0320 06:50:27.685354 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log"
Mar 20 06:50:28 crc kubenswrapper[5136]: I0320 06:50:28.347877 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 06:50:28 crc kubenswrapper[5136]: E0320 06:50:28.478215 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 20 06:50:29 crc kubenswrapper[5136]: I0320 06:50:29.346244 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.349640 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.844604 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.846029 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.846072 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.846087 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:30 crc kubenswrapper[5136]: I0320 06:50:30.846118 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Mar 20 06:50:30 crc kubenswrapper[5136]: E0320 06:50:30.850036 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 20 06:50:30 crc kubenswrapper[5136]: E0320 06:50:30.850088 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Mar 20 06:50:31 crc kubenswrapper[5136]: I0320 06:50:31.346149 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.346619 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.441176 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7731f22a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,LastTimestamp:2026-03-20 06:49:28.339493418 +0000 UTC m=+0.598804579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.448034 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.452244 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.458302 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.463081 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7eea37d0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.469010384 +0000 UTC m=+0.728321535,LastTimestamp:2026-03-20 06:49:28.469010384 +0000 UTC m=+0.728321535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.469073 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.499615694 +0000 UTC m=+0.758926845,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.472329 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.499634555 +0000 UTC m=+0.758945706,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.478297 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.499643976 +0000 UTC m=+0.758955137,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Mar 20 
06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.483926 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.502105585 +0000 UTC m=+0.761416736,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.487271 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.502124576 +0000 UTC m=+0.761435727,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.492746 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is 
forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.502135806 +0000 UTC m=+0.761446957,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.496667 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.502867196 +0000 UTC m=+0.762178347,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.499910 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 
default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.502882847 +0000 UTC m=+0.762193988,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.504343 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.502921508 +0000 UTC m=+0.762232659,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.505608 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.503343476 +0000 UTC m=+0.762654617,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.511726 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.503355356 +0000 UTC m=+0.762666507,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.515199 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc 
status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.503363326 +0000 UTC m=+0.762674477,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.521023 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.506365488 +0000 UTC m=+0.765676639,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.531500 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC 
m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.506378389 +0000 UTC m=+0.765689540,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.536013 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.506388039 +0000 UTC m=+0.765699190,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.540517 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.506401229 +0000 UTC m=+0.765712380,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.546144 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.50640972 +0000 UTC m=+0.765720871,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.551551 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee7979335a\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7979335a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377717594 +0000 UTC m=+0.637028755,LastTimestamp:2026-03-20 06:49:28.506453132 +0000 UTC m=+0.765764303,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.556006 5136 event.go:359] 
"Server rejected event (will not retry!)" err="events \"crc.189e79ee7978cb3e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee7978cb3e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377690942 +0000 UTC m=+0.637002103,LastTimestamp:2026-03-20 06:49:28.506547245 +0000 UTC m=+0.765858396,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.560569 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189e79ee79790663\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189e79ee79790663 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.377706083 +0000 UTC m=+0.637017244,LastTimestamp:2026-03-20 06:49:28.506558656 +0000 UTC m=+0.765869807,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.568358 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ee976c7828 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.88019972 +0000 UTC m=+1.139510871,LastTimestamp:2026-03-20 06:49:28.88019972 +0000 UTC m=+1.139510871,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.575326 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79ee9799ea1a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.88317801 +0000 UTC m=+1.142489151,LastTimestamp:2026-03-20 06:49:28.88317801 +0000 UTC m=+1.142489151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.579927 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ee9855b16c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.895484268 +0000 UTC m=+1.154795419,LastTimestamp:2026-03-20 06:49:28.895484268 +0000 UTC m=+1.154795419,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.584259 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ee993d1bef openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.910650351 +0000 UTC m=+1.169961502,LastTimestamp:2026-03-20 06:49:28.910650351 +0000 UTC m=+1.169961502,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.588575 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79ee99ebbe14 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:28.922095124 +0000 UTC m=+1.181406275,LastTimestamp:2026-03-20 06:49:28.922095124 +0000 UTC m=+1.181406275,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.592860 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79eebf61d999 
openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.550592409 +0000 UTC m=+1.809903560,LastTimestamp:2026-03-20 06:49:29.550592409 +0000 UTC m=+1.809903560,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.597123 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79eebf61d9ad openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.550592429 +0000 UTC m=+1.809903580,LastTimestamp:2026-03-20 06:49:29.550592429 +0000 UTC m=+1.809903580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.601037 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.189e79eebf62740e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.55063195 +0000 UTC m=+1.809943101,LastTimestamp:2026-03-20 06:49:29.55063195 +0000 UTC m=+1.809943101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.606036 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eebf6299ad openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.550641581 +0000 UTC m=+1.809952732,LastTimestamp:2026-03-20 06:49:29.550641581 +0000 UTC m=+1.809952732,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.610657 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79eebf62d201 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.550656001 +0000 UTC m=+1.809967152,LastTimestamp:2026-03-20 06:49:29.550656001 +0000 UTC m=+1.809967152,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.618008 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79eec0627c4f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.567411279 +0000 UTC m=+1.826722430,LastTimestamp:2026-03-20 06:49:29.567411279 +0000 UTC m=+1.826722430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.623607 5136 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79eec07df78d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.569212301 +0000 UTC m=+1.828523452,LastTimestamp:2026-03-20 06:49:29.569212301 +0000 UTC m=+1.828523452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.628500 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79eec07e8c59 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.569250393 +0000 UTC m=+1.828561544,LastTimestamp:2026-03-20 06:49:29.569250393 +0000 UTC m=+1.828561544,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.633046 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79eec080d3cf openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.569399759 +0000 UTC m=+1.828710910,LastTimestamp:2026-03-20 06:49:29.569399759 +0000 UTC m=+1.828710910,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.636247 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eec0843532 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.569621298 +0000 UTC m=+1.828932449,LastTimestamp:2026-03-20 06:49:29.569621298 +0000 UTC m=+1.828932449,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.640055 
5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eec091c540 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.570510144 +0000 UTC m=+1.829821295,LastTimestamp:2026-03-20 06:49:29.570510144 +0000 UTC m=+1.829821295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.643392 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eed175c3f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.853887477 +0000 UTC 
m=+2.113198638,LastTimestamp:2026-03-20 06:49:29.853887477 +0000 UTC m=+2.113198638,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.649380 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eed247922d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.867637293 +0000 UTC m=+2.126948454,LastTimestamp:2026-03-20 06:49:29.867637293 +0000 UTC m=+2.126948454,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.654246 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eed258c823 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.868765219 +0000 UTC m=+2.128076370,LastTimestamp:2026-03-20 06:49:29.868765219 +0000 UTC m=+2.128076370,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.657757 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eee2523616 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.13677007 +0000 UTC m=+2.396081231,LastTimestamp:2026-03-20 06:49:30.13677007 +0000 UTC m=+2.396081231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.662042 5136 event.go:359] "Server rejected event 
(will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eee4675f8e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.171711374 +0000 UTC m=+2.431022525,LastTimestamp:2026-03-20 06:49:30.171711374 +0000 UTC m=+2.431022525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.667745 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eee47d39f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 
06:49:30.173143541 +0000 UTC m=+2.432454692,LastTimestamp:2026-03-20 06:49:30.173143541 +0000 UTC m=+2.432454692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.672759 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79eef2ff37ef openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.416543727 +0000 UTC m=+2.675854908,LastTimestamp:2026-03-20 06:49:30.416543727 +0000 UTC m=+2.675854908,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.676389 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79eef34d423f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.421658175 +0000 UTC m=+2.680969336,LastTimestamp:2026-03-20 06:49:30.421658175 +0000 UTC m=+2.680969336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.680779 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79eef3add8ac openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.42798814 +0000 UTC m=+2.687299331,LastTimestamp:2026-03-20 06:49:30.42798814 +0000 UTC m=+2.687299331,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.684110 5136 
event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79eef3bd95b4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.429019572 +0000 UTC m=+2.688330733,LastTimestamp:2026-03-20 06:49:30.429019572 +0000 UTC m=+2.688330733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.687603 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eef7f860d3 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.499981523 +0000 UTC 
m=+2.759292684,LastTimestamp:2026-03-20 06:49:30.499981523 +0000 UTC m=+2.759292684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.691853 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eefb3336a2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.554168994 +0000 UTC m=+2.813480185,LastTimestamp:2026-03-20 06:49:30.554168994 +0000 UTC m=+2.813480185,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.695047 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef01135582 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container 
etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.652743042 +0000 UTC m=+2.912054213,LastTimestamp:2026-03-20 06:49:30.652743042 +0000 UTC m=+2.912054213,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.698405 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.698564 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.700556 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.700591 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:32 crc kubenswrapper[5136]: I0320 06:50:32.700600 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.701018 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79ef01a7df4a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container 
kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.662477642 +0000 UTC m=+2.921788803,LastTimestamp:2026-03-20 06:49:30.662477642 +0000 UTC m=+2.921788803,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.704872 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef02238c5f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.670582879 +0000 UTC m=+2.929894030,LastTimestamp:2026-03-20 06:49:30.670582879 +0000 UTC m=+2.929894030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.709902 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189e79ef02fb8fe1 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.684739553 +0000 UTC m=+2.944050704,LastTimestamp:2026-03-20 06:49:30.684739553 +0000 UTC m=+2.944050704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.711954 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef03932db6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.694675894 +0000 UTC m=+2.953987035,LastTimestamp:2026-03-20 06:49:30.694675894 +0000 UTC m=+2.953987035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.714273 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef03958942 openshift-kube-scheduler 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.694830402 +0000 UTC m=+2.954141553,LastTimestamp:2026-03-20 06:49:30.694830402 +0000 UTC m=+2.954141553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.718710 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef03b48d8f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.696863119 +0000 UTC m=+2.956174270,LastTimestamp:2026-03-20 06:49:30.696863119 +0000 UTC m=+2.956174270,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.719653 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef0826b76f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.771453807 +0000 UTC m=+3.030764958,LastTimestamp:2026-03-20 06:49:30.771453807 +0000 UTC m=+3.030764958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.725874 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef091ef484 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.787722372 +0000 UTC m=+3.047033523,LastTimestamp:2026-03-20 06:49:30.787722372 +0000 UTC m=+3.047033523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.730069 5136 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef093140c8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.788921544 +0000 UTC m=+3.048232695,LastTimestamp:2026-03-20 06:49:30.788921544 +0000 UTC m=+3.048232695,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.734431 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef0e625861 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.876024929 +0000 UTC m=+3.135336120,LastTimestamp:2026-03-20 06:49:30.876024929 +0000 UTC 
m=+3.135336120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.740494 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef0f0d0ef3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.887212787 +0000 UTC m=+3.146523958,LastTimestamp:2026-03-20 06:49:30.887212787 +0000 UTC m=+3.146523958,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.747181 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef0f27085d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.888915037 +0000 UTC m=+3.148226188,LastTimestamp:2026-03-20 06:49:30.888915037 +0000 UTC m=+3.148226188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.751264 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef1354b2b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.959016629 +0000 UTC m=+3.218327780,LastTimestamp:2026-03-20 06:49:30.959016629 +0000 UTC m=+3.218327780,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.758575 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef143417c3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.973657027 +0000 UTC m=+3.232968178,LastTimestamp:2026-03-20 06:49:30.973657027 +0000 UTC m=+3.232968178,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.762647 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef144834f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:30.974975217 +0000 UTC m=+3.234286368,LastTimestamp:2026-03-20 06:49:30.974975217 +0000 UTC m=+3.234286368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.769108 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef1adcb515 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.085370645 +0000 UTC m=+3.344681816,LastTimestamp:2026-03-20 06:49:31.085370645 +0000 UTC m=+3.344681816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.776386 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189e79ef1c776671 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.112285809 +0000 UTC m=+3.371596980,LastTimestamp:2026-03-20 06:49:31.112285809 +0000 UTC m=+3.371596980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc 
kubenswrapper[5136]: E0320 06:50:32.782047 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef1e54a54c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.143562572 +0000 UTC m=+3.402873733,LastTimestamp:2026-03-20 06:49:31.143562572 +0000 UTC m=+3.402873733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.787691 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef1f385c1d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.158486045 +0000 UTC m=+3.417797196,LastTimestamp:2026-03-20 06:49:31.158486045 +0000 UTC 
m=+3.417797196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.791580 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef1f52d4f5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.160220917 +0000 UTC m=+3.419532068,LastTimestamp:2026-03-20 06:49:31.160220917 +0000 UTC m=+3.419532068,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.795545 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef2cb02118 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.384439064 +0000 UTC m=+3.643750215,LastTimestamp:2026-03-20 06:49:31.384439064 +0000 UTC m=+3.643750215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.799632 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef2d69b59a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.396601242 +0000 UTC m=+3.655912413,LastTimestamp:2026-03-20 06:49:31.396601242 +0000 UTC m=+3.655912413,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.803416 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef2d7f5733 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.398018867 +0000 UTC m=+3.657330028,LastTimestamp:2026-03-20 06:49:31.398018867 +0000 UTC m=+3.657330028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.809908 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef30f85829 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.456280617 +0000 UTC m=+3.715591788,LastTimestamp:2026-03-20 06:49:31.456280617 +0000 UTC m=+3.715591788,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.815712 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef3b7fde92 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.632934546 +0000 UTC m=+3.892245697,LastTimestamp:2026-03-20 06:49:31.632934546 +0000 UTC m=+3.892245697,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.821667 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef3c1f0b8c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.643366284 +0000 UTC m=+3.902677435,LastTimestamp:2026-03-20 
06:49:31.643366284 +0000 UTC m=+3.902677435,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.825279 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef3fbdad47 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.704094023 +0000 UTC m=+3.963405174,LastTimestamp:2026-03-20 06:49:31.704094023 +0000 UTC m=+3.963405174,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.830701 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef40fda367 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.725063015 +0000 UTC m=+3.984374166,LastTimestamp:2026-03-20 06:49:31.725063015 +0000 UTC 
m=+3.984374166,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.834313 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef6e5abf3e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.48613971 +0000 UTC m=+4.745450901,LastTimestamp:2026-03-20 06:49:32.48613971 +0000 UTC m=+4.745450901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.839236 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef79eba285 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.680184453 +0000 UTC 
m=+4.939495614,LastTimestamp:2026-03-20 06:49:32.680184453 +0000 UTC m=+4.939495614,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.843615 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef7adeae27 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.696112679 +0000 UTC m=+4.955423830,LastTimestamp:2026-03-20 06:49:32.696112679 +0000 UTC m=+4.955423830,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.848249 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef7aee2059 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.697124953 +0000 UTC m=+4.956436104,LastTimestamp:2026-03-20 06:49:32.697124953 +0000 UTC m=+4.956436104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.852289 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef85fdb37e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.882695038 +0000 UTC m=+5.142006189,LastTimestamp:2026-03-20 06:49:32.882695038 +0000 UTC m=+5.142006189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.857217 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef86e121a8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.897599912 +0000 UTC 
m=+5.156911063,LastTimestamp:2026-03-20 06:49:32.897599912 +0000 UTC m=+5.156911063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.863006 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef86eef33a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:32.89850553 +0000 UTC m=+5.157816681,LastTimestamp:2026-03-20 06:49:32.89850553 +0000 UTC m=+5.157816681,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.866326 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef9501c0f6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container 
etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.13461887 +0000 UTC m=+5.393930031,LastTimestamp:2026-03-20 06:49:33.13461887 +0000 UTC m=+5.393930031,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.870484 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef95e039fe openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.149198846 +0000 UTC m=+5.408510007,LastTimestamp:2026-03-20 06:49:33.149198846 +0000 UTC m=+5.408510007,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.874340 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79ef95f27e42 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.15039597 +0000 UTC m=+5.409707131,LastTimestamp:2026-03-20 06:49:33.15039597 +0000 UTC m=+5.409707131,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.880096 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79efa32ffaa5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.372529317 +0000 UTC m=+5.631840478,LastTimestamp:2026-03-20 06:49:33.372529317 +0000 UTC m=+5.631840478,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.883893 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79efa4256882 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.388613762 +0000 UTC m=+5.647924953,LastTimestamp:2026-03-20 06:49:33.388613762 +0000 UTC m=+5.647924953,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.887581 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79efa43cdb70 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.390150512 +0000 UTC m=+5.649461703,LastTimestamp:2026-03-20 06:49:33.390150512 +0000 UTC m=+5.649461703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.893687 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79efb3077eb8 openshift-etcd 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.638311608 +0000 UTC m=+5.897622759,LastTimestamp:2026-03-20 06:49:33.638311608 +0000 UTC m=+5.897622759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.899500 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189e79efb3a6c41c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:33.648749596 +0000 UTC m=+5.908060787,LastTimestamp:2026-03-20 06:49:33.648749596 +0000 UTC m=+5.908060787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.905805 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-controller-manager-crc.189e79f02dd31f26 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 20 06:50:32 crc kubenswrapper[5136]: body: Mar 20 06:50:32 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:35.698476838 +0000 UTC m=+7.957788019,LastTimestamp:2026-03-20 06:49:35.698476838 +0000 UTC m=+7.957788019,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.912089 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79f02dd49507 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:35.698572551 +0000 UTC m=+7.957883742,LastTimestamp:2026-03-20 06:49:35.698572551 +0000 UTC 
m=+7.957883742,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.920101 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-apiserver-crc.189e79f1be8a47f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 20 06:50:32 crc kubenswrapper[5136]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 20 06:50:32 crc kubenswrapper[5136]: Mar 20 06:50:32 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:42.421366769 +0000 UTC m=+14.680677910,LastTimestamp:2026-03-20 06:49:42.421366769 +0000 UTC m=+14.680677910,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.925675 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79f1be8af9af openshift-kube-apiserver 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:42.421412271 +0000 UTC m=+14.680723422,LastTimestamp:2026-03-20 06:49:42.421412271 +0000 UTC m=+14.680723422,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.930561 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e79f1be8a47f1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-apiserver-crc.189e79f1be8a47f1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Mar 20 06:50:32 crc kubenswrapper[5136]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Mar 20 06:50:32 crc kubenswrapper[5136]: Mar 20 06:50:32 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:42.421366769 +0000 UTC m=+14.680677910,LastTimestamp:2026-03-20 06:49:42.429685138 +0000 UTC 
m=+14.688996289,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.936663 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e79f1be8af9af\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79f1be8af9af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:42.421412271 +0000 UTC m=+14.680723422,LastTimestamp:2026-03-20 06:49:42.429728139 +0000 UTC m=+14.689039290,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.940747 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-apiserver-crc.189e79f1f4415556 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe 
error: HTTP probe failed with statuscode: 500 Mar 20 06:50:32 crc kubenswrapper[5136]: body: [+]ping ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]log ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]etcd ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/openshift.io-startkubeinformers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/generic-apiserver-start-informers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-filter ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-apiextensions-informers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-apiextensions-controllers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/crd-informer-synced ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-system-namespaces-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-legacy-token-tracking-controller ok Mar 20 06:50:32 crc 
kubenswrapper[5136]: [+]poststarthook/start-service-ip-repair-controllers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 20 06:50:32 crc kubenswrapper[5136]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/priority-and-fairness-config-producer ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/bootstrap-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/start-kube-aggregator-informers ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-registration-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-wait-for-first-sync ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-discovery-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]autoregister-completion ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-openapi-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 20 06:50:32 crc kubenswrapper[5136]: livez check failed Mar 20 06:50:32 crc kubenswrapper[5136]: Mar 20 06:50:32 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:43.322555734 +0000 UTC m=+15.581866885,LastTimestamp:2026-03-20 06:49:43.322555734 +0000 UTC m=+15.581866885,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 
20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.946457 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79f1f441dae5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:43.322589925 +0000 UTC m=+15.581901076,LastTimestamp:2026-03-20 06:49:43.322589925 +0000 UTC m=+15.581901076,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.950635 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189e79ef2d7f5733\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189e79ef2d7f5733 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:31.398018867 +0000 UTC m=+3.657330028,LastTimestamp:2026-03-20 06:49:43.536722103 +0000 UTC m=+15.796033264,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.956951 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e92298 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 20 06:50:32 crc kubenswrapper[5136]: body: Mar 20 06:50:32 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.699140248 +0000 UTC m=+17.958451409,LastTimestamp:2026-03-20 06:49:45.699140248 +0000 UTC m=+17.958451409,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.960664 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e9d62c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.69918622 +0000 UTC m=+17.958497381,LastTimestamp:2026-03-20 06:49:45.69918622 +0000 UTC m=+17.958497381,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.972844 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79f281e92298\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 06:50:32 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e92298 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 20 06:50:32 crc kubenswrapper[5136]: body: Mar 20 06:50:32 
crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.699140248 +0000 UTC m=+17.958451409,LastTimestamp:2026-03-20 06:49:55.699408977 +0000 UTC m=+27.958720158,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:32 crc kubenswrapper[5136]: > Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.976846 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79f281e9d62c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e9d62c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.69918622 +0000 UTC m=+17.958497381,LastTimestamp:2026-03-20 06:49:55.69951956 +0000 UTC m=+27.958830751,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.981863 5136 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" 
event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79f4d62ecd4a openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:55.702926666 +0000 UTC m=+27.962237857,LastTimestamp:2026-03-20 06:49:55.702926666 +0000 UTC m=+27.962237857,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.986355 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79eec091c540\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eec091c540 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.570510144 +0000 UTC m=+1.829821295,LastTimestamp:2026-03-20 06:49:55.817482932 +0000 UTC m=+28.076794123,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.991985 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79eed175c3f5\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eed175c3f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.853887477 +0000 UTC m=+2.113198638,LastTimestamp:2026-03-20 06:49:56.00425945 +0000 UTC m=+28.263570601,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:32 crc kubenswrapper[5136]: E0320 06:50:32.996415 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79eed247922d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79eed247922d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:29.867637293 +0000 UTC m=+2.126948454,LastTimestamp:2026-03-20 06:49:56.013387904 +0000 UTC m=+28.272699045,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:33 crc kubenswrapper[5136]: E0320 06:50:33.003006 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79f281e92298\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 06:50:33 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e92298 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Mar 20 06:50:33 crc kubenswrapper[5136]: body: Mar 20 06:50:33 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.699140248 +0000 UTC m=+17.958451409,LastTimestamp:2026-03-20 06:50:05.699256644 +0000 UTC m=+37.958567835,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:33 crc kubenswrapper[5136]: > Mar 20 06:50:33 crc kubenswrapper[5136]: E0320 06:50:33.008233 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79f281e9d62c\" is forbidden: User 
\"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e9d62c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.69918622 +0000 UTC m=+17.958497381,LastTimestamp:2026-03-20 06:50:05.699312445 +0000 UTC m=+37.958623626,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:50:33 crc kubenswrapper[5136]: E0320 06:50:33.014107 5136 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189e79f281e92298\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Mar 20 06:50:33 crc kubenswrapper[5136]: &Event{ObjectMeta:{kube-controller-manager-crc.189e79f281e92298 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout 
exceeded while awaiting headers) Mar 20 06:50:33 crc kubenswrapper[5136]: body: Mar 20 06:50:33 crc kubenswrapper[5136]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:49:45.699140248 +0000 UTC m=+17.958451409,LastTimestamp:2026-03-20 06:50:15.699582253 +0000 UTC m=+47.958893434,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Mar 20 06:50:33 crc kubenswrapper[5136]: > Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.345226 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.768455 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.768621 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.769208 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.770025 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.770234 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:33 crc kubenswrapper[5136]: I0320 06:50:33.770379 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:34 crc kubenswrapper[5136]: I0320 06:50:34.345614 5136 csi_plugin.go:884] Failed to contact API 
server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:34 crc kubenswrapper[5136]: I0320 06:50:34.703960 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:34 crc kubenswrapper[5136]: I0320 06:50:34.705038 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:34 crc kubenswrapper[5136]: I0320 06:50:34.705112 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:34 crc kubenswrapper[5136]: I0320 06:50:34.705133 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:35 crc kubenswrapper[5136]: I0320 06:50:35.345587 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:36 crc kubenswrapper[5136]: I0320 06:50:36.348910 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 06:50:37.345926 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 06:50:37.851040 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 
06:50:37.852292 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 06:50:37.852331 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 06:50:37.852346 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:37 crc kubenswrapper[5136]: I0320 06:50:37.852397 5136 kubelet_node_status.go:76] "Attempting to register node" node="crc" Mar 20 06:50:37 crc kubenswrapper[5136]: E0320 06:50:37.856507 5136 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Mar 20 06:50:37 crc kubenswrapper[5136]: E0320 06:50:37.857027 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.349015 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.396055 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.398011 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.398060 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.398076 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.398890 5136 scope.go:117] "RemoveContainer" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c" Mar 20 06:50:38 crc kubenswrapper[5136]: E0320 06:50:38.478329 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Mar 20 06:50:38 crc kubenswrapper[5136]: W0320 06:50:38.506750 5136 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 20 06:50:38 crc kubenswrapper[5136]: E0320 06:50:38.506800 5136 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.717635 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.719610 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2"} Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.719744 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.720904 5136 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.720933 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:38 crc kubenswrapper[5136]: I0320 06:50:38.720945 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.346167 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.724246 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.724837 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.726472 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" exitCode=255 Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.726499 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2"} Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.726540 5136 scope.go:117] "RemoveContainer" containerID="f25fb30a68fee9c11b53a300d7d79a5fc097ea8dfca4b8f12cad9a41189e516c" Mar 20 06:50:39 
crc kubenswrapper[5136]: I0320 06:50:39.726679 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.727715 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.727751 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.727764 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.730975 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:50:39 crc kubenswrapper[5136]: E0320 06:50:39.731217 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.764880 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 20 06:50:39 crc kubenswrapper[5136]: I0320 06:50:39.779263 5136 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.092648 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.349474 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.729957 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.731325 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.732059 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.732091 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.732104 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:40 crc kubenswrapper[5136]: I0320 06:50:40.732592 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:50:40 crc kubenswrapper[5136]: E0320 06:50:40.732772 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:41 crc kubenswrapper[5136]: I0320 06:50:41.347291 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group 
"storage.k8s.io" at the cluster scope Mar 20 06:50:42 crc kubenswrapper[5136]: I0320 06:50:42.348897 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.345569 5136 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.379457 5136 csr.go:261] certificate signing request csr-9qcnf is approved, waiting to be issued Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.389408 5136 csr.go:257] certificate signing request csr-9qcnf is issued Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.403666 5136 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.914726 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.914935 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.919407 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.919471 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.919486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:43 crc kubenswrapper[5136]: I0320 06:50:43.922069 5136 
scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:50:43 crc kubenswrapper[5136]: E0320 06:50:43.922466 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.202118 5136 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.391944 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2027-01-03 00:20:41.686041867 +0000 UTC Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.391989 5136 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6929h29m57.294056471s for next certificate rotation Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.857045 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.858233 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.858272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.858285 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.858382 5136 kubelet_node_status.go:76] 
"Attempting to register node" node="crc" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.868267 5136 kubelet_node_status.go:115] "Node was previously registered" node="crc" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.868548 5136 kubelet_node_status.go:79] "Successfully registered node" node="crc" Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.868565 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.873019 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.873109 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.873156 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.873199 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.873223 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:44Z","lastTransitionTime":"2026-03-20T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.893883 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.903711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.903752 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.903761 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.903777 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.903788 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:44Z","lastTransitionTime":"2026-03-20T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.913256 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.919404 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.919437 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.919446 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.919459 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.919470 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:44Z","lastTransitionTime":"2026-03-20T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.932495 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.942850 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.942893 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.942933 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.942948 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:44 crc kubenswrapper[5136]: I0320 06:50:44.942958 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:44Z","lastTransitionTime":"2026-03-20T06:50:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.955119 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.955234 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 20 06:50:44 crc kubenswrapper[5136]: E0320 06:50:44.955260 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.055483 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.155861 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.257027 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.357185 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.458166 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.558679 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.658837 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:45 crc kubenswrapper[5136]: I0320 06:50:45.732502 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:50:45 crc kubenswrapper[5136]: I0320 06:50:45.732647 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 20 06:50:45 crc kubenswrapper[5136]: I0320 06:50:45.733685 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:45 crc kubenswrapper[5136]: I0320 06:50:45.733750 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:45 crc kubenswrapper[5136]: I0320 06:50:45.733771 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.758926 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.859095 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:45 crc kubenswrapper[5136]: E0320 06:50:45.959514 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.060649 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.161184 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.261338 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.361888 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.462659 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.563292 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.663865 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.764600 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.865730 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:46 crc kubenswrapper[5136]: E0320 06:50:46.966030 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.066177 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.167095 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.268277 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.369436 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.470489 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.571288 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.672182 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.772541 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:47 crc kubenswrapper[5136]: E0320 06:50:47.872659 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:47.973490 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.074312 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.174616 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.275367 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.375596 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.476112 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.479471 5136 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.577005 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.678173 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Mar 20 06:50:48 crc 
kubenswrapper[5136]: E0320 06:50:48.779222 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.880153 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:48 crc kubenswrapper[5136]: E0320 06:50:48.980510 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.080659 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.181920 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.282276 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.382877 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.483541 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.583990 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.685003 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.785884 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: I0320 06:50:49.796060 5136 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.886187 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:49 crc kubenswrapper[5136]: E0320 06:50:49.987337 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.088346 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: I0320 06:50:50.148436 5136 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.189526 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.290305 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.390913 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.491445 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.592562 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.692757 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.793020 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.893262 5136 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:50 crc kubenswrapper[5136]: E0320 06:50:50.993777 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.094602 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.194721 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.295915 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: I0320 06:50:51.396135 5136 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.397044 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: I0320 06:50:51.397569 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:51 crc kubenswrapper[5136]: I0320 06:50:51.397620 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:51 crc kubenswrapper[5136]: I0320 06:50:51.397639 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.497156 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.597747 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc 
kubenswrapper[5136]: E0320 06:50:51.698881 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.800002 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:51 crc kubenswrapper[5136]: E0320 06:50:51.901164 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.002292 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.103267 5136 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.156726 5136 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.206381 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.206458 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.206478 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.206903 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.207119 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.310588 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.310635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.310653 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.310677 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.310693 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.376866 5136 apiserver.go:52] "Watching apiserver" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.382674 5136 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.383067 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.383754 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.383886 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.383968 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.384036 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.384114 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.383883 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.384478 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.384866 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.384953 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.389575 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.389672 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.389737 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.389756 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.389862 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.390077 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.390336 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.390539 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.390962 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.413582 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc 
kubenswrapper[5136]: I0320 06:50:52.413638 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.413656 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.413682 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.413702 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.416494 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.431937 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.444609 5136 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.448144 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.467047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478134 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478197 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod 
\"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478261 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478290 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478320 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478351 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478380 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478448 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478479 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478539 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478569 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478654 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478723 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478759 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478790 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478927 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478958 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.478987 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479021 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479053 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479084 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479119 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479151 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: 
\"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479157 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479320 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479159 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479181 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479436 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479456 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479693 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479765 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479850 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479914 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479965 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.479939 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480013 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480065 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480116 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480162 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480207 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480266 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480321 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480376 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480424 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480436 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480538 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480598 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480656 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480681 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480713 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480737 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480759 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480784 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480809 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 06:50:52 
crc kubenswrapper[5136]: I0320 06:50:52.480865 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480887 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480909 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480931 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480956 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480979 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481003 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481037 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481059 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481091 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481115 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 06:50:52 crc 
kubenswrapper[5136]: I0320 06:50:52.481139 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481207 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481231 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481259 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481282 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481304 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481327 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481349 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481395 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481416 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481438 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481464 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481487 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481511 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481533 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481556 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" 
(UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481592 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481614 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481652 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481681 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481709 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481733 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481760 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481785 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481807 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481854 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481878 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481900 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481972 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481996 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482033 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482056 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482079 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482102 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482126 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482150 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482173 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482197 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482221 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482247 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482271 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482293 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482315 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482339 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482362 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482387 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482412 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482435 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482457 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482482 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482516 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482548 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482574 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482648 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482681 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482714 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482752 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482903 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482927 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483066 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483226 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483254 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483282 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480623 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480911 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480782 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480962 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.480987 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481061 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481083 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481121 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481117 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483655 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481218 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481361 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481446 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481482 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481523 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481902 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.481971 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482216 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482235 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482509 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.482801 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483233 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483279 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483734 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.484108 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483913 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.484318 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.484689 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.484898 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485059 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485009 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485401 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485480 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485504 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.485912 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486231 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.483306 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486386 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486449 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486496 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486539 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486580 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486617 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486656 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486692 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486738 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486792 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486939 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.486997 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487050 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487303 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod 
\"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487433 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487495 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487552 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487629 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487691 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487754 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487852 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487966 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488021 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488073 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488134 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488310 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488393 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488617 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488683 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488729 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488767 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488805 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488945 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489000 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489054 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489115 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489171 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489225 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489275 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489329 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod 
\"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489386 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489442 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489526 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489572 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489621 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489673 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489726 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489782 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489935 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489991 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490046 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490105 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490222 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490275 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490333 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490388 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490446 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490505 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490562 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Mar 20 06:50:52 
crc kubenswrapper[5136]: I0320 06:50:52.490657 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490734 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490810 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490941 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491051 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491099 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491152 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491190 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491236 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491279 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491320 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491360 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491441 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491593 5136 
reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491638 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491662 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491685 5136 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491706 5136 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491747 5136 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491790 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491858 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491884 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491907 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491930 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491952 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491994 5136 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492027 5136 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492058 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492087 5136 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492119 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492152 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492185 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492223 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492258 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492290 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492320 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492349 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492374 5136 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492396 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492419 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492441 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492464 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492487 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492512 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492533 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492555 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492577 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492598 5136 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492621 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492646 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492695 5136 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492734 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492765 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492788 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498975 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487224 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487923 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487980 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.487997 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.488068 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489069 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489489 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.489517 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490056 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490080 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490277 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490410 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.490689 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491277 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491400 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491444 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491455 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492017 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492366 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492390 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.491233 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492700 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492665 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.492916 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.493038 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.500088 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.493229 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.493373 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.493768 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.493787 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.494426 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.494513 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.494479 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.494931 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495103 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495194 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495268 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495492 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495612 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.495652 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.494895 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.496140 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.496499 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.496582 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497161 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497201 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497233 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497394 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497546 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497541 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497558 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497590 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497931 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498017 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498012 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498069 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498071 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498079 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.497888 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498336 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498416 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498573 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498666 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498716 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498935 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.498971 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.499120 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.499583 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.500371 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.500544 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:50:53.00051455 +0000 UTC m=+85.259825771 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.500992 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.501136 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.501320 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.501345 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.501792 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.501960 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:53.001928488 +0000 UTC m=+85.261239679 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.502065 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.502159 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.502509 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.502639 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.502695 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:53.002675182 +0000 UTC m=+85.261986343 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.502768 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.503010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.505954 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.506063 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.506497 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.506512 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.506583 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.507170 5136 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.509128 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.509214 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.509566 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). 
InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.515389 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.517272 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.517360 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.518713 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.519715 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: 
"kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.519797 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.520147 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.520199 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.520196 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.520220 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.520312 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:53.020282885 +0000 UTC m=+85.279594066 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.520561 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.520652 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.520675 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.521255 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522200 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522232 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522246 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522290 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522305 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.522669 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.523632 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.523884 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.523924 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.524071 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.531037 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.531074 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.531090 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.531085 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.531187 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:53.031166432 +0000 UTC m=+85.290477683 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.532998 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.533040 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.533342 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.533407 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.533653 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534038 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534078 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534387 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534626 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.533810 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534732 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534773 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.534978 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.535075 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.535934 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536165 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536222 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536392 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536663 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536681 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536857 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.536881 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.537338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.537345 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.539603 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.539682 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.540148 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.540455 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.541000 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.541588 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.541684 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.541792 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.541791 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542066 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542343 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542600 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542653 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542781 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542864 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542911 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.542926 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.543089 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.543421 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.543543 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.543560 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.543699 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.544251 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.544432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.544545 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.545298 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.545506 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.545880 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.546232 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.546403 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.549879 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.559686 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.566609 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.570045 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.575364 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.593947 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594059 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594065 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594203 5136 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594241 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594251 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594347 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594367 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594381 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594396 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594409 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594421 5136 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594434 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594447 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594460 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594473 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594485 5136 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594497 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594510 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594522 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594534 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594548 5136 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594562 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594574 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594585 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594598 5136 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594612 5136 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" 
DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594624 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594636 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594648 5136 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594659 5136 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594672 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594683 5136 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594695 5136 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594706 5136 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594719 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594731 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594742 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594755 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594768 5136 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594779 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594790 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594801 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594830 5136 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594843 5136 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594855 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594866 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594877 5136 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594888 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc 
kubenswrapper[5136]: I0320 06:50:52.594901 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594915 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594967 5136 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594980 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.594992 5136 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595003 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595015 5136 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595027 5136 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595040 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595053 5136 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595066 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595078 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595091 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595103 5136 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595142 5136 reconciler_common.go:293] "Volume detached for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595154 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595166 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595179 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595191 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595203 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595216 5136 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595229 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595243 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595256 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595270 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595283 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595294 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595306 5136 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595318 5136 reconciler_common.go:293] "Volume detached for volume 
\"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595330 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595341 5136 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595354 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595365 5136 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595376 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595388 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595400 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" 
DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595411 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595422 5136 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595433 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595444 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595456 5136 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595466 5136 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595478 5136 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595490 5136 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595503 5136 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595516 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595528 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595539 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595552 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595563 5136 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595577 5136 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595588 5136 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595600 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595611 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595623 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595636 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595648 5136 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595660 5136 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on 
node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595672 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595684 5136 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595696 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595709 5136 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595721 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595733 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595745 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595756 5136 
reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595767 5136 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595779 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595790 5136 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595800 5136 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595834 5136 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595847 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595860 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: 
\"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595871 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595882 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595894 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595905 5136 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595917 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595928 5136 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595942 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 
06:50:52.595954 5136 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595965 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595976 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595987 5136 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.595999 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596009 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596019 5136 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596030 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596042 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596053 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596064 5136 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596076 5136 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596087 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596098 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596109 5136 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc 
kubenswrapper[5136]: I0320 06:50:52.596120 5136 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596132 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596144 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596156 5136 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596167 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596179 5136 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596189 5136 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596201 5136 reconciler_common.go:293] "Volume 
detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596213 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596225 5136 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596237 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596248 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.596260 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.625686 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.625759 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.625772 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.625789 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.625801 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.706527 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.720947 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 06:50:52 crc kubenswrapper[5136]: source /etc/kubernetes/apiserver-url.env Mar 20 06:50:52 crc kubenswrapper[5136]: else Mar 20 06:50:52 crc kubenswrapper[5136]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 06:50:52 crc kubenswrapper[5136]: exit 1 Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 06:50:52 crc kubenswrapper[5136]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.722294 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.722304 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.729070 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.729101 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.729110 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.729147 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.729159 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.730376 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Mar 20 06:50:52 crc kubenswrapper[5136]: W0320 06:50:52.734550 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-fa291c4c5f41e65eb95b55917ede458ae6bd30c47ca6de7b21c34bb044812050 WatchSource:0}: Error finding container fa291c4c5f41e65eb95b55917ede458ae6bd30c47ca6de7b21c34bb044812050: Status 404 returned error can't find the container with id fa291c4c5f41e65eb95b55917ede458ae6bd30c47ca6de7b21c34bb044812050 Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.738117 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.739312 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 20 06:50:52 crc kubenswrapper[5136]: W0320 06:50:52.744712 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-38c2f861a45bd75124962230c84212a9e761dffc74406bc887dc863b71c3dc04 WatchSource:0}: Error finding container 38c2f861a45bd75124962230c84212a9e761dffc74406bc887dc863b71c3dc04: Status 404 returned error can't find the container with id 38c2f861a45bd75124962230c84212a9e761dffc74406bc887dc863b71c3dc04 Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.747529 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:50:52 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:50:52 crc kubenswrapper[5136]: set +o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 20 06:50:52 crc kubenswrapper[5136]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 06:50:52 crc kubenswrapper[5136]: ho_enable="--enable-hybrid-overlay" Mar 20 06:50:52 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 06:50:52 crc kubenswrapper[5136]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 06:50:52 crc kubenswrapper[5136]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-host=127.0.0.1 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-port=9743 \ Mar 20 06:50:52 crc kubenswrapper[5136]: ${ho_enable} \ Mar 20 06:50:52 crc kubenswrapper[5136]: --enable-interconnect \ Mar 20 06:50:52 crc kubenswrapper[5136]: --disable-approver \ Mar 20 06:50:52 crc kubenswrapper[5136]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --wait-for-kubernetes-api=200s \ Mar 20 06:50:52 crc kubenswrapper[5136]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:50:52 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.749367 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:50:52 crc 
kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:50:52 crc kubenswrapper[5136]: set +o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: Mar 20 06:50:52 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --disable-webhook \ Mar 20 06:50:52 crc kubenswrapper[5136]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:50:52 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.751230 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.761957 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"38c2f861a45bd75124962230c84212a9e761dffc74406bc887dc863b71c3dc04"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.762949 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fa291c4c5f41e65eb95b55917ede458ae6bd30c47ca6de7b21c34bb044812050"} Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.763409 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:50:52 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:50:52 crc kubenswrapper[5136]: set +o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Mar 20 06:50:52 crc kubenswrapper[5136]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 06:50:52 crc kubenswrapper[5136]: ho_enable="--enable-hybrid-overlay" Mar 20 06:50:52 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 06:50:52 crc kubenswrapper[5136]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 06:50:52 crc kubenswrapper[5136]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-host=127.0.0.1 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --webhook-port=9743 \ Mar 20 06:50:52 crc kubenswrapper[5136]: ${ho_enable} \ Mar 20 06:50:52 crc kubenswrapper[5136]: --enable-interconnect \ Mar 20 06:50:52 crc kubenswrapper[5136]: --disable-approver \ Mar 20 06:50:52 crc kubenswrapper[5136]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --wait-for-kubernetes-api=200s \ Mar 20 06:50:52 crc kubenswrapper[5136]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:50:52 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: 
{{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.764420 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"b39a0006d55ebea3b75736ad7b0bd012e2e8b0bc106d0b1827465768a33096d9"} Mar 20 06:50:52 crc 
kubenswrapper[5136]: E0320 06:50:52.764585 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.765270 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:50:52 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:50:52 crc kubenswrapper[5136]: set +o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: Mar 20 06:50:52 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:50:52 crc kubenswrapper[5136]: --disable-webhook \ Mar 20 06:50:52 crc kubenswrapper[5136]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 06:50:52 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:50:52 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.765576 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:50:52 crc kubenswrapper[5136]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 20 06:50:52 crc kubenswrapper[5136]: set -o allexport Mar 20 06:50:52 crc kubenswrapper[5136]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 06:50:52 crc 
kubenswrapper[5136]: source /etc/kubernetes/apiserver-url.env Mar 20 06:50:52 crc kubenswrapper[5136]: else Mar 20 06:50:52 crc kubenswrapper[5136]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 06:50:52 crc kubenswrapper[5136]: exit 1 Mar 20 06:50:52 crc kubenswrapper[5136]: fi Mar 20 06:50:52 crc kubenswrapper[5136]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 06:50:52 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c
69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:50:52 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.765804 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" 
with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.766793 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 20 06:50:52 crc kubenswrapper[5136]: E0320 06:50:52.766865 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.773035 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.786401 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.796443 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.813558 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.824992 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.831651 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.831842 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.831951 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.832035 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.832115 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.834549 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.847094 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.857169 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.872729 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.886310 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.897774 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.912780 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.935156 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.935198 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.935210 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.935230 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:52 crc kubenswrapper[5136]: I0320 06:50:52.935243 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:52Z","lastTransitionTime":"2026-03-20T06:50:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.037220 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.037253 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.037261 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.037276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.037287 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.101206 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101302 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:50:54.101279701 +0000 UTC m=+86.360590862 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.101431 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101553 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 
06:50:53.101570 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101601 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:54.10159063 +0000 UTC m=+86.360901791 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.101634 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.101694 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:53 crc 
kubenswrapper[5136]: E0320 06:50:53.101865 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101889 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101938 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101956 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.101972 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:54.101941012 +0000 UTC m=+86.361252213 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.102065 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.102112 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.102196 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:54.10217101 +0000 UTC m=+86.361482261 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.102197 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:53 crc kubenswrapper[5136]: E0320 06:50:53.102395 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:54.102375167 +0000 UTC m=+86.361686358 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.140342 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.140741 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.140975 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.141192 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.141387 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.244491 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.244536 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.244548 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.244565 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.244577 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.347568 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.347605 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.347613 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.347626 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.347634 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.450723 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.451159 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.451352 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.451532 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.451703 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.553971 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.555175 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.555318 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.555453 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.555592 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.658652 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.658696 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.658706 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.658721 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.658733 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.761515 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.761564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.761576 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.761593 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.761605 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.864640 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.864686 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.864696 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.864713 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.864725 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.967499 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.967774 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.967859 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.967924 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:53 crc kubenswrapper[5136]: I0320 06:50:53.967985 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:53Z","lastTransitionTime":"2026-03-20T06:50:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.071510 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.071614 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.071636 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.071666 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.071687 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.112038 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.112152 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.112195 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.112230 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.112284 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112496 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112523 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112542 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112554 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112596 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:56.112582148 +0000 UTC m=+88.371893299 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112544 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112612 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112666 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:56.11264939 +0000 UTC m=+88.371960571 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112615 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112721 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:56.112709052 +0000 UTC m=+88.372020243 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.112795 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:50:56.112785075 +0000 UTC m=+88.372096256 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.113209 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.113364 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:50:56.113341043 +0000 UTC m=+88.372652204 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.174540 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.174592 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.174603 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.174621 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.174633 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.277254 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.277509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.277607 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.277717 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.277850 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.380552 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.380610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.380627 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.380655 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.380673 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.395922 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.396116 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.396298 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.396487 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.396362 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:50:54 crc kubenswrapper[5136]: E0320 06:50:54.396674 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.403130 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.404327 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.407385 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.408738 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.411156 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.412295 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.413567 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.415986 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.417977 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.420130 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.421132 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.422552 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.423565 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.424680 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.425729 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.426777 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.428099 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.428972 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.431116 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.432222 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.432855 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.433517 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.434629 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.435374 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.436344 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.437114 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.438258 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.438864 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.439987 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.440577 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.441149 5136 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.441720 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.444462 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.445679 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.446532 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.448880 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.450266 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.451425 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.454186 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.456456 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.457531 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.458973 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.461151 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.462424 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.463599 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.465064 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.466888 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.468515 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.470776 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.472418 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.474981 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.476423 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.477772 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.479680 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.484419 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.484477 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.484495 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.484520 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.484538 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.588353 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.588419 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.588437 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.588466 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.588483 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.690852 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.690882 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.690892 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.690908 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.690919 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.793418 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.793543 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.793562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.793625 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.793641 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.896162 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.896221 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.896240 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.896262 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.896279 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.998756 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.998802 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.998835 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.998858 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:54 crc kubenswrapper[5136]: I0320 06:50:54.998872 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:54Z","lastTransitionTime":"2026-03-20T06:50:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.102562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.102599 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.102610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.102643 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.102654 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.145450 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.145495 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.145508 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.145553 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.145566 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.159948 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.165456 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.165522 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.165547 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.165617 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.165643 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.178316 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.183272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.183350 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.183378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.183403 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.183423 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.199420 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.204414 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.204462 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.204483 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.204508 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.204526 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.222048 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.227990 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.228035 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.228057 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.228087 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.228109 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.244722 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.245138 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.247527 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.247735 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.247914 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.248097 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.248249 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.354388 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.354626 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.354689 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.354753 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.354837 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.442603 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.444012 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.444370 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.457400 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.457421 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.457429 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.457439 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.457449 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.560716 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.560776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.560802 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.560865 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.560893 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.663676 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.663736 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.663754 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.663778 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.663795 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.766978 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.767327 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.767412 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.767493 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.767562 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.772300 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:50:55 crc kubenswrapper[5136]: E0320 06:50:55.772686 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.870776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.870872 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.870893 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.870919 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.870938 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.973271 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.973513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.973595 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.973685 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:55 crc kubenswrapper[5136]: I0320 06:50:55.973805 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:55Z","lastTransitionTime":"2026-03-20T06:50:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.076713 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.077117 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.077258 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.077438 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.077608 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.128441 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.128526 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.128559 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.128586 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.128616 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128671 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:51:00.128639233 +0000 UTC m=+92.387950394 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128680 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128718 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128733 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128747 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128788 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:00.128760177 +0000 UTC m=+92.388071368 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128804 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128843 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:00.128803199 +0000 UTC m=+92.388114390 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128872 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:00.128861541 +0000 UTC m=+92.388172692 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128937 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.128999 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.129019 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" 
not registered] Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.129117 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:00.129087168 +0000 UTC m=+92.388398379 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.181226 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.181528 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.181859 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.182025 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.182154 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.287318 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.287648 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.287799 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.287977 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.288101 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.390378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.390575 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.390743 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.390855 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.390954 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.395737 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.395754 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.395761 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.396112 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.396172 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 06:50:56 crc kubenswrapper[5136]: E0320 06:50:56.396219 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.401568 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.493124 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.493374 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.493488 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.493593 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.493663 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.595880 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.595911 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.595921 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.595934 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.595945 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.699285 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.699336 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.699354 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.699376 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.699394 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.802314 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.802342 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.802352 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.802365 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.802375 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.904897 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.904956 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.904968 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.904994 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:56 crc kubenswrapper[5136]: I0320 06:50:56.905037 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:56Z","lastTransitionTime":"2026-03-20T06:50:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.007991 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.008056 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.008080 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.008112 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.008133 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.111207 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.111260 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.111279 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.111303 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.111321 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.214647 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.215022 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.215180 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.215324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.215467 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.317744 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.317800 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.317858 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.317887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.317909 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.420946 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.421010 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.421033 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.421063 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.421084 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.524337 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.524473 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.524503 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.524533 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.524554 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.626733 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.626801 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.626858 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.626887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.626908 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.730733 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.730863 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.730891 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.730920 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.730949 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.833671 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.833709 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.833722 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.833740 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.833753 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.936218 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.936276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.936289 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.936305 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:57 crc kubenswrapper[5136]: I0320 06:50:57.936316 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:57Z","lastTransitionTime":"2026-03-20T06:50:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.039111 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.039183 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.039209 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.039256 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.039280 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.141536 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.141575 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.141584 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.141600 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.141609 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.245329 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.245372 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.245381 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.245409 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.245421 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"}
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.348917 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.348978 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.348994 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.349051 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.349091 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.396709 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.396716 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.396957 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:50:58 crc kubenswrapper[5136]: E0320 06:50:58.397048 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 06:50:58 crc kubenswrapper[5136]: E0320 06:50:58.396844 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 06:50:58 crc kubenswrapper[5136]: E0320 06:50:58.397298 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.408979 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.419258 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.433109 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.445515 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.452071 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.452134 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.452155 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.452184 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.452208 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.456341 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.468964 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.482943 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.500426 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.554454 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.554508 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.554525 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.554549 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.554567 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.657859 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.657909 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.657922 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.657940 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.657952 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.760119 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.760184 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.760196 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.760214 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.760228 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.862480 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.862544 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.862567 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.862596 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.862620 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.965486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.965553 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.965576 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.965602 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:58 crc kubenswrapper[5136]: I0320 06:50:58.965628 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:58Z","lastTransitionTime":"2026-03-20T06:50:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.068862 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.068926 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.068948 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.068975 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.068997 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.171873 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.171967 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.171988 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.172015 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.172034 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.274535 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.274592 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.274611 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.274636 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.274654 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.377154 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.377196 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.377207 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.377224 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.377236 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.480431 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.480571 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.480609 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.480639 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.480662 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.583948 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.584024 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.584049 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.584076 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.584099 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.686864 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.686902 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.686913 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.686929 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.686940 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.789247 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.789303 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.789321 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.789343 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.789361 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.891629 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.891663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.891691 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.891703 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.891712 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.995142 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.995192 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.995209 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.995230 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:50:59 crc kubenswrapper[5136]: I0320 06:50:59.995248 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:50:59Z","lastTransitionTime":"2026-03-20T06:50:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.097510 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.097573 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.097596 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.097625 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.097646 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.167624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.167732 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.167964 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.167976 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168032 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:51:08.167992827 +0000 UTC m=+100.427304008 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.168105 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.168208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168211 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:08.168176753 +0000 UTC m=+100.427487944 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168231 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168332 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:08.168300707 +0000 UTC m=+100.427611908 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168233 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168395 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168401 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168420 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168426 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168446 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168503 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:08.168473793 +0000 UTC m=+100.427784974 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.168533 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:08.168520495 +0000 UTC m=+100.427831686 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.200464 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.200509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.200522 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.200542 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.200555 5136 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.303493 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.303559 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.303583 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.303621 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.303668 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.396000 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.396082 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.396131 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.396000 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.396230 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:00 crc kubenswrapper[5136]: E0320 06:51:00.396408 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.405421 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.405450 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.405460 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.405472 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.405483 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.508866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.508938 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.508965 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.508997 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.509018 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.612520 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.612563 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.612582 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.612604 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.612622 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.715116 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.715168 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.715178 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.715190 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.715200 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.817960 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.818012 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.818029 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.818052 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.818071 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.920518 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.920576 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.920594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.920627 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:00 crc kubenswrapper[5136]: I0320 06:51:00.920645 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:00Z","lastTransitionTime":"2026-03-20T06:51:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.022374 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.022446 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.022464 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.022489 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.022507 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.124663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.124702 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.124713 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.124730 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.124743 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.227700 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.227766 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.227784 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.227810 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.227870 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.330747 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.330769 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.330776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.330787 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.330795 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.433652 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.433702 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.433720 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.433741 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.433781 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.536695 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.536752 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.536776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.536804 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.536861 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.640305 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.640361 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.640378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.640399 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.640415 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.742763 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.742808 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.742839 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.742857 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.742869 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.845512 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.845572 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.845594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.845625 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.845650 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.948590 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.948651 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.948670 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.948692 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:01 crc kubenswrapper[5136]: I0320 06:51:01.948709 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:01Z","lastTransitionTime":"2026-03-20T06:51:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.052526 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.052567 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.052577 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.052594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.052607 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.155471 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.155532 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.155550 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.155578 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.155597 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.258959 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.259031 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.259049 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.259073 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.259094 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.380045 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.380094 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.380110 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.380135 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.380153 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.396193 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.396356 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:02 crc kubenswrapper[5136]: E0320 06:51:02.396417 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.396467 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:02 crc kubenswrapper[5136]: E0320 06:51:02.396867 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:02 crc kubenswrapper[5136]: E0320 06:51:02.396976 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.483371 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.483460 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.483516 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.483541 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.483561 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.586810 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.586931 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.586959 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.586997 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.587022 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.690073 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.690127 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.690150 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.690177 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.690198 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.792558 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.792611 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.792632 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.792658 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.792680 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.895983 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.896092 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.896116 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.896183 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.896205 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.998358 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.998396 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.998410 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.998429 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:02 crc kubenswrapper[5136]: I0320 06:51:02.998444 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:02Z","lastTransitionTime":"2026-03-20T06:51:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.100732 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.100784 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.100808 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.100866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.100898 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.203297 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.203352 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.203363 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.203375 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.203383 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.306387 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.306451 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.306472 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.306503 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.306524 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.409637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.409702 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.409726 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.409757 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.409777 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.512466 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.512509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.512527 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.512552 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.512568 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.615618 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.615687 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.615709 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.615737 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.615758 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.718928 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.719002 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.719024 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.719051 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.719075 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.821761 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.821803 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.821831 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.821848 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.821859 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.924453 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.924513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.924530 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.924553 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:03 crc kubenswrapper[5136]: I0320 06:51:03.924572 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:03Z","lastTransitionTime":"2026-03-20T06:51:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.026844 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.027128 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.027211 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.027304 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.027387 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.129336 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.129384 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.129396 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.129414 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.129427 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.231610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.231662 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.231678 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.231701 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.231716 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.333324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.333402 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.333425 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.333454 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.333478 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.395980 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.396006 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:04 crc kubenswrapper[5136]: E0320 06:51:04.396135 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:04 crc kubenswrapper[5136]: E0320 06:51:04.396324 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.396440 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:04 crc kubenswrapper[5136]: E0320 06:51:04.396602 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.435544 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.435592 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.435605 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.435622 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.435635 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.538119 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.538151 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.538159 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.538171 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.538185 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.640244 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.640296 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.640305 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.640319 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.640329 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.742967 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.743008 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.743016 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.743031 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.743041 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.845024 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.845100 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.845123 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.845154 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.845181 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.947579 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.947640 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.947662 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.947691 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:04 crc kubenswrapper[5136]: I0320 06:51:04.947710 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:04Z","lastTransitionTime":"2026-03-20T06:51:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.050533 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.050593 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.050610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.050635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.050654 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.153473 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.153543 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.153566 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.153597 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.153619 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.256021 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.256449 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.256803 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.257170 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.257419 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.360052 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.360116 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.360135 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.360162 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.360180 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.398770 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},Re
startPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.398974 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:05 crc kubenswrapper[5136]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 20 06:51:05 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:05 crc kubenswrapper[5136]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 06:51:05 crc kubenswrapper[5136]: source /etc/kubernetes/apiserver-url.env Mar 20 06:51:05 crc kubenswrapper[5136]: else Mar 20 06:51:05 crc kubenswrapper[5136]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 06:51:05 crc kubenswrapper[5136]: exit 1 Mar 20 06:51:05 crc kubenswrapper[5136]: fi Mar 20 06:51:05 crc kubenswrapper[5136]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 06:51:05 crc kubenswrapper[5136]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:05 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.399995 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.400055 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet 
been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.463143 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.463222 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.463245 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.463276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.463298 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.566875 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.566993 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.567019 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.567049 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.567070 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.574640 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.574692 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.575079 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.575125 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.575151 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.604215 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.612348 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.612408 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.612429 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.612456 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.612477 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.643252 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.647082 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.647211 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.647273 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.647338 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.647412 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.663062 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.667618 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.667668 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.667683 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.667702 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.667714 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.678015 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.681743 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.681904 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.682200 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.682308 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.682389 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.691026 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:05 crc kubenswrapper[5136]: E0320 06:51:05.691184 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.692542 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.692637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.692696 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.692753 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.692830 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.795028 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.795180 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.795258 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.795321 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.795376 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.897502 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.897548 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.897560 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.897576 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:05 crc kubenswrapper[5136]: I0320 06:51:05.897587 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:05Z","lastTransitionTime":"2026-03-20T06:51:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.000532 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.000587 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.000607 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.000630 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.000648 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.103607 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.103687 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.103711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.103739 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.103761 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.207248 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.207632 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.207771 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.207974 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.208115 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.310439 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.311043 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.311246 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.311477 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.311685 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.396543 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.396626 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.396716 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.396730 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.396899 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.397027 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.399280 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:06 crc kubenswrapper[5136]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:51:06 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:51:06 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:06 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:51:06 crc kubenswrapper[5136]: set +o allexport Mar 20 06:51:06 crc kubenswrapper[5136]: fi Mar 20 06:51:06 crc kubenswrapper[5136]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 20 06:51:06 crc kubenswrapper[5136]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 06:51:06 crc kubenswrapper[5136]: ho_enable="--enable-hybrid-overlay" Mar 20 06:51:06 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 06:51:06 crc kubenswrapper[5136]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 06:51:06 crc kubenswrapper[5136]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 06:51:06 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:51:06 crc kubenswrapper[5136]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 06:51:06 crc kubenswrapper[5136]: --webhook-host=127.0.0.1 \ Mar 20 06:51:06 crc kubenswrapper[5136]: --webhook-port=9743 \ Mar 20 06:51:06 crc kubenswrapper[5136]: ${ho_enable} \ Mar 20 06:51:06 crc kubenswrapper[5136]: --enable-interconnect \ Mar 20 06:51:06 crc 
kubenswrapper[5136]: --disable-approver \ Mar 20 06:51:06 crc kubenswrapper[5136]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 06:51:06 crc kubenswrapper[5136]: --wait-for-kubernetes-api=200s \ Mar 20 06:51:06 crc kubenswrapper[5136]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 06:51:06 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:51:06 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions
:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:06 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.402457 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:06 crc kubenswrapper[5136]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:51:06 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:51:06 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:06 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:51:06 crc kubenswrapper[5136]: set +o allexport Mar 20 06:51:06 crc kubenswrapper[5136]: fi Mar 20 06:51:06 crc kubenswrapper[5136]: Mar 20 06:51:06 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 06:51:06 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:51:06 crc kubenswrapper[5136]: --disable-webhook \ Mar 20 06:51:06 crc kubenswrapper[5136]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 06:51:06 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:51:06 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m 
DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:06 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:06 crc kubenswrapper[5136]: E0320 06:51:06.404468 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.414663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.414986 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.415179 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.415457 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.415753 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.518432 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.518911 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.519124 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.519330 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.519524 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.622654 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.622711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.622733 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.622760 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.622781 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.725455 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.725508 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.725529 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.725551 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.725567 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.828789 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.828894 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.828913 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.828938 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.828956 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.932354 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.932427 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.932444 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.932473 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:06 crc kubenswrapper[5136]: I0320 06:51:06.932493 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:06Z","lastTransitionTime":"2026-03-20T06:51:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.035861 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.035935 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.035953 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.035981 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.035998 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.139755 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.139865 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.139891 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.139922 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.139946 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.245372 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.245410 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.245422 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.245438 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.245449 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.347871 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.347937 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.347955 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.347977 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.347995 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.450499 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.450545 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.450557 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.450572 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.450582 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.553572 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.553637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.553656 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.553679 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.553697 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.657309 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.657407 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.657431 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.657462 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.657485 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.761005 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.761060 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.761075 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.761096 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.761112 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.863593 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.863635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.863646 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.863662 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.863673 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.967272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.967332 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.967350 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.967375 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:07 crc kubenswrapper[5136]: I0320 06:51:07.967395 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:07Z","lastTransitionTime":"2026-03-20T06:51:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.069801 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.070704 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.070926 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.071117 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.071306 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.174301 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.174360 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.174378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.174402 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.174422 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.242857 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.242931 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.242955 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.242973 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.242995 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243057 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:51:24.243017573 +0000 UTC m=+116.502328764 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243094 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243109 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243100 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243209 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object 
"openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243244 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:24.24320719 +0000 UTC m=+116.502518381 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243120 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243273 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:24.243258882 +0000 UTC m=+116.502570073 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243300 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243363 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243391 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243311 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:24.243297943 +0000 UTC m=+116.502609134 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.243501 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:24.243466699 +0000 UTC m=+116.502777900 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.277928 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.277987 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.278010 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.278041 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.278066 5136 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.380467 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.380520 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.380536 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.380559 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.380575 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.395899 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.395871 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.396010 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.396417 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.396634 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.397036 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.397312 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:51:08 crc kubenswrapper[5136]: E0320 06:51:08.397536 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.412001 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.430654 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.441588 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.454097 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.467435 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.479453 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.483363 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.483550 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.483701 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.483887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.484042 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.496160 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.506163 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.587223 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.587282 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.587294 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.587309 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.587319 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.691116 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.691172 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.691190 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.691213 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.691230 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.793930 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.794003 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.794025 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.794056 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.794078 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.901389 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.901464 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.901485 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.901511 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:08 crc kubenswrapper[5136]: I0320 06:51:08.901531 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:08Z","lastTransitionTime":"2026-03-20T06:51:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.004564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.004636 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.004659 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.004685 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.004702 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.107410 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.107490 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.107518 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.107551 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.107575 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.209599 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.209875 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.209892 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.209909 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.209921 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.312721 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.312764 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.312776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.312792 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.312803 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.416288 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.416358 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.416380 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.416413 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.416437 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.520075 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.520153 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.520170 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.520194 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.520266 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.623633 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.623702 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.623725 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.623754 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.623777 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.727209 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.727300 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.727324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.727353 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.727371 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.830347 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.830413 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.830436 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.830466 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.830488 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.933735 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.933799 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.933852 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.933884 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:09 crc kubenswrapper[5136]: I0320 06:51:09.933905 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:09Z","lastTransitionTime":"2026-03-20T06:51:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.036551 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.036631 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.036656 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.036687 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.036709 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.139917 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.139990 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.140013 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.140044 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.140066 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.243026 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.243070 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.243081 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.243098 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.243110 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.345751 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.345811 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.345867 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.345890 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.345904 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.395888 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.395917 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.395895 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:10 crc kubenswrapper[5136]: E0320 06:51:10.396112 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:10 crc kubenswrapper[5136]: E0320 06:51:10.396262 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:10 crc kubenswrapper[5136]: E0320 06:51:10.396411 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.448412 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.448486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.448511 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.448541 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.448564 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.551261 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.551324 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.551350 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.551378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.551400 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.654679 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.654739 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.654756 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.654780 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.654798 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.757635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.757676 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.757688 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.757711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.757724 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.859690 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.860123 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.860272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.860432 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.860574 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.964257 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.964320 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.964339 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.964363 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:10 crc kubenswrapper[5136]: I0320 06:51:10.964381 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:10Z","lastTransitionTime":"2026-03-20T06:51:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.068016 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.068087 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.068110 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.068137 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.068180 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.171250 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.171319 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.171341 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.171375 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.171397 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.274433 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.274757 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.274875 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.274970 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.275062 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.377737 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.378225 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.378479 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.378723 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.378951 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.481435 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.481497 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.481514 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.481537 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.481554 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.583917 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.583959 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.583974 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.583995 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.584010 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.686935 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.686977 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.686989 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.687006 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.687018 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.790235 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.790623 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.790770 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.790961 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.791175 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.893435 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.893691 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.893863 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.894020 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.894151 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.996939 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.997005 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.997023 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.997045 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:11 crc kubenswrapper[5136]: I0320 06:51:11.997063 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:11Z","lastTransitionTime":"2026-03-20T06:51:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.100252 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.100314 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.100333 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.100355 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.100371 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.203499 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.203547 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.203564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.203588 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.203605 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.307435 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.307481 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.307512 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.307531 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.307543 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.395637 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.395749 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:12 crc kubenswrapper[5136]: E0320 06:51:12.395796 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.395639 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:12 crc kubenswrapper[5136]: E0320 06:51:12.396546 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:12 crc kubenswrapper[5136]: E0320 06:51:12.396546 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.408833 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.408866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.408874 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.408885 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.408895 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.512023 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.512383 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.512580 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.512733 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.512961 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.616266 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.616569 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.616595 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.616619 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.616636 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.719951 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.720024 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.720041 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.720114 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.720154 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.822182 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.822230 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.822247 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.822270 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.822287 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.926082 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.926148 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.926166 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.926635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:12 crc kubenswrapper[5136]: I0320 06:51:12.926692 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:12Z","lastTransitionTime":"2026-03-20T06:51:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.030078 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.030394 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.030487 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.030596 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.030687 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.133272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.133672 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.133929 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.134136 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.134362 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.238026 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.238095 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.238112 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.238135 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.238152 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.341050 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.341119 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.341153 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.341178 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.341196 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.443456 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.443492 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.443500 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.443512 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.443521 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.545869 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.545913 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.545923 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.545938 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.545948 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.648153 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.648222 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.648232 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.648244 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.648253 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.750902 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.750954 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.750973 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.750996 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.751013 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.852975 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.853221 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.853286 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.853368 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.853457 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.956175 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.956232 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.956243 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.956259 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:13 crc kubenswrapper[5136]: I0320 06:51:13.956270 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:13Z","lastTransitionTime":"2026-03-20T06:51:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.059176 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.059217 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.059226 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.059240 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.059251 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.161443 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.161482 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.161494 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.161509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.161519 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.263491 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.263530 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.263542 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.263562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.263573 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.366093 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.366157 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.366178 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.366212 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.366246 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.395701 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:14 crc kubenswrapper[5136]: E0320 06:51:14.395850 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.395963 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.396099 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:14 crc kubenswrapper[5136]: E0320 06:51:14.396269 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:14 crc kubenswrapper[5136]: E0320 06:51:14.396384 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.469389 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.469490 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.469516 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.469540 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.469557 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.572685 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.572715 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.572723 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.572736 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.572746 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.675423 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.675485 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.675505 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.675528 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.675546 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.778677 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.778734 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.778756 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.778784 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.778805 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.882104 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.882511 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.882660 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.882807 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.882983 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.986261 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.986298 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.986307 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.986321 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:14 crc kubenswrapper[5136]: I0320 06:51:14.986331 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:14Z","lastTransitionTime":"2026-03-20T06:51:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.090194 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.090564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.090927 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.091239 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.091530 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.128465 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-pt4jb"] Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.129436 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.132880 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.133791 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.137227 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.149595 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.167293 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.181806 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.195710 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.196074 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.196257 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.196403 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.196596 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.200292 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.206293 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a27959f-3f41-4683-87d6-7b2a9210d634-hosts-file\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.206546 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njj48\" (UniqueName: \"kubernetes.io/projected/4a27959f-3f41-4683-87d6-7b2a9210d634-kube-api-access-njj48\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.212965 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.229489 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.243167 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.259869 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.271375 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.300383 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.300487 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.300564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.300637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 
06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.300660 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.307920 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a27959f-3f41-4683-87d6-7b2a9210d634-hosts-file\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.308153 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njj48\" (UniqueName: \"kubernetes.io/projected/4a27959f-3f41-4683-87d6-7b2a9210d634-kube-api-access-njj48\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.308170 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4a27959f-3f41-4683-87d6-7b2a9210d634-hosts-file\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.339005 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njj48\" (UniqueName: \"kubernetes.io/projected/4a27959f-3f41-4683-87d6-7b2a9210d634-kube-api-access-njj48\") pod \"node-resolver-pt4jb\" (UID: \"4a27959f-3f41-4683-87d6-7b2a9210d634\") " pod="openshift-dns/node-resolver-pt4jb" Mar 20 
06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.403803 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.403900 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.403929 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.403960 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.403983 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.456449 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-pt4jb" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.486389 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:15 crc kubenswrapper[5136]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 20 06:51:15 crc kubenswrapper[5136]: set -uo pipefail Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 20 06:51:15 crc kubenswrapper[5136]: HOSTS_FILE="/etc/hosts" Mar 20 06:51:15 crc kubenswrapper[5136]: TEMP_FILE="/etc/hosts.tmp" Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Make a temporary file with the old hosts file's attributes. Mar 20 06:51:15 crc kubenswrapper[5136]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 20 06:51:15 crc kubenswrapper[5136]: echo "Failed to preserve hosts file. Exiting." Mar 20 06:51:15 crc kubenswrapper[5136]: exit 1 Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: while true; do Mar 20 06:51:15 crc kubenswrapper[5136]: declare -A svc_ips Mar 20 06:51:15 crc kubenswrapper[5136]: for svc in "${services[@]}"; do Mar 20 06:51:15 crc kubenswrapper[5136]: # Fetch service IP from cluster dns if present. We make several tries Mar 20 06:51:15 crc kubenswrapper[5136]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Mar 20 06:51:15 crc kubenswrapper[5136]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 20 06:51:15 crc kubenswrapper[5136]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 20 06:51:15 crc kubenswrapper[5136]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 20 06:51:15 crc kubenswrapper[5136]: for i in ${!cmds[*]} Mar 20 06:51:15 crc kubenswrapper[5136]: do Mar 20 06:51:15 crc kubenswrapper[5136]: ips=($(eval "${cmds[i]}")) Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: svc_ips["${svc}"]="${ips[@]}" Mar 20 06:51:15 crc kubenswrapper[5136]: break Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Update /etc/hosts only if we get valid service IPs Mar 20 06:51:15 crc kubenswrapper[5136]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 20 06:51:15 crc kubenswrapper[5136]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 20 06:51:15 crc kubenswrapper[5136]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 20 06:51:15 crc kubenswrapper[5136]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: continue Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Append resolver entries for services Mar 20 06:51:15 crc kubenswrapper[5136]: rc=0 Mar 20 06:51:15 crc kubenswrapper[5136]: for svc in "${!svc_ips[@]}"; do Mar 20 06:51:15 crc kubenswrapper[5136]: for ip in ${svc_ips[${svc}]}; do Mar 20 06:51:15 crc kubenswrapper[5136]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ $rc -ne 0 ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: continue Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 20 06:51:15 crc kubenswrapper[5136]: # Replace /etc/hosts with our modified version if needed Mar 20 06:51:15 crc kubenswrapper[5136]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 20 06:51:15 crc kubenswrapper[5136]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: unset svc_ips Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: 
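The container command dumped in the log entries above rebuilds `/etc/hosts` by filtering out its own marker-tagged lines with `sed`, appending fresh entries, and copying the temp file back only when the content changed. A minimal standalone sketch of that technique, safe to run outside the pod: the paths are temp files rather than the real `/etc/hosts`, and the resolved IP is hard-coded in place of the `dig` lookup, so everything beyond the marker string and the sed/cmp pattern is a stand-in.

```shell
#!/bin/bash
# Sketch of the node-resolver hosts-file update loop body (single pass).
# HOSTS_FILE is a temp file standing in for /etc/hosts; the service IP is
# a hypothetical value in place of the script's dig output.
set -uo pipefail

OPENSHIFT_MARKER="openshift-generated-node-resolver"
HOSTS_FILE="$(mktemp)"
TEMP_FILE="${HOSTS_FILE}.tmp"

# Seed a hosts file with one manual entry and one stale generated entry.
cat > "${HOSTS_FILE}" <<EOF
127.0.0.1 localhost
10.0.0.9 image-registry.openshift-image-registry.svc # ${OPENSHIFT_MARKER}
EOF

# Rebuild: with -n (--silent) nothing is printed; lines matching the marker
# are deleted before the w command, so TEMP_FILE gets everything else.
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"

# Append the freshly "resolved" entry, tagged so the next pass can remove it.
echo "10.217.4.10 image-registry.openshift-image-registry.svc # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}"

# Replace the hosts file only if the content actually changed.
cmp -s "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}"

# Exactly one generated entry should remain.
grep -c "${OPENSHIFT_MARKER}" "${HOSTS_FILE}"
```

The marker-comment-plus-sed-delete pattern is what lets the real script run idempotently every 60 seconds without accumulating duplicate or stale entries, while the `cmp || cp` step avoids rewriting the file (and churning its attributes) when nothing changed.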
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njj48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-pt4jb_openshift-dns(4a27959f-3f41-4683-87d6-7b2a9210d634): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:15 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.489616 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-pt4jb" 
podUID="4a27959f-3f41-4683-87d6-7b2a9210d634" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.494238 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-jst28"] Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.494616 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.497143 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.497414 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.497483 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.499149 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.499512 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.501326 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-dbsfs"] Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.507690 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-tjpps"] Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.508176 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.508866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.508953 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.508976 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.509012 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.509032 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.509890 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-rootfs\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.510018 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94xgp\" (UniqueName: \"kubernetes.io/projected/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-kube-api-access-94xgp\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.510145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-mcd-auth-proxy-config\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.510196 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-proxy-tls\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.510846 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.511114 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.512044 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.512332 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.513647 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.513928 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.514122 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.514459 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.515700 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.531135 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.543021 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.563585 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\"
:\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.573896 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.585749 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.596056 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.605189 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610682 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-os-release\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610733 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlfxh\" (UniqueName: \"kubernetes.io/projected/263c5427-a835-40c6-93cb-4bb66a83ea5b-kube-api-access-dlfxh\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610777 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-rootfs\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610842 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-bin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610951 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-rootfs\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.610955 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-k8s-cni-cncf-io\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611091 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94xgp\" (UniqueName: \"kubernetes.io/projected/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-kube-api-access-94xgp\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611142 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-cnibin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611184 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-daemon-config\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611410 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-cnibin\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611521 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-multus-certs\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611629 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-cni-binary-copy\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611711 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-os-release\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611749 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611788 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611807 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611868 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611762 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-proxy-tls\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611888 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.611975 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612042 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-hostroot\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612097 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612156 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-socket-dir-parent\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612208 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " 
pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612243 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wd92w\" (UniqueName: \"kubernetes.io/projected/059eafe0-4e83-486d-b958-992b00aa0878-kube-api-access-wd92w\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612279 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-conf-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612312 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-system-cni-dir\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612367 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-system-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612419 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-kubelet\") pod 
\"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612453 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-etc-kubernetes\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612489 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612555 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-multus\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612607 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-mcd-auth-proxy-config\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.612649 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-netns\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.613358 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.614115 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-mcd-auth-proxy-config\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.618182 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-proxy-tls\") pod \"machine-config-daemon-jst28\" 
(UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.621556 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.632380 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.639991 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94xgp\" (UniqueName: \"kubernetes.io/projected/f64ebce8-37f2-4631-9b8b-d34ebc9b93ba-kube-api-access-94xgp\") pod \"machine-config-daemon-jst28\" (UID: \"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\") " pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.646335 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.658692 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.670266 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.680838 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.689127 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.704747 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713267 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-multus-certs\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713320 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-cnibin\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc 
kubenswrapper[5136]: I0320 06:51:15.713347 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713375 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-cni-binary-copy\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713399 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-os-release\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713400 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-multus-certs\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713423 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-hostroot\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713557 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" 
(UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713539 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-os-release\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713593 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-cnibin\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713469 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-hostroot\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713639 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-socket-dir-parent\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713709 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-tuning-conf-dir\") pod 
\"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713694 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\
\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713763 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-conf-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713848 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-conf-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713809 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-system-cni-dir\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713724 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-socket-dir-parent\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.713907 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-system-cni-dir\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714214 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wd92w\" 
(UniqueName: \"kubernetes.io/projected/059eafe0-4e83-486d-b958-992b00aa0878-kube-api-access-wd92w\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714245 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-system-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714262 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-kubelet\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714276 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-etc-kubernetes\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714301 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-multus\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714319 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714346 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-netns\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714364 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-os-release\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlfxh\" (UniqueName: \"kubernetes.io/projected/263c5427-a835-40c6-93cb-4bb66a83ea5b-kube-api-access-dlfxh\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714397 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-bin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714417 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-k8s-cni-cncf-io\") pod 
\"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714444 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-cnibin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714463 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-daemon-config\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714564 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/059eafe0-4e83-486d-b958-992b00aa0878-tuning-conf-dir\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714725 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-system-cni-dir\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714858 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-etc-kubernetes\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 
06:51:15.714888 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-multus\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714896 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-kubelet\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714925 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-k8s-cni-cncf-io\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714926 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-run-netns\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714962 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-host-var-lib-cni-bin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714973 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-cnibin\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.714973 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-cni-binary-copy\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715009 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/263c5427-a835-40c6-93cb-4bb66a83ea5b-os-release\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715711 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715957 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715974 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.715981 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/059eafe0-4e83-486d-b958-992b00aa0878-cni-binary-copy\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.716018 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.716034 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.716512 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/263c5427-a835-40c6-93cb-4bb66a83ea5b-multus-daemon-config\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.727717 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.727805 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.727869 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.727895 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 
06:51:15.727944 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.728579 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.733928 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlfxh\" (UniqueName: \"kubernetes.io/projected/263c5427-a835-40c6-93cb-4bb66a83ea5b-kube-api-access-dlfxh\") pod \"multus-tjpps\" (UID: \"263c5427-a835-40c6-93cb-4bb66a83ea5b\") " pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.741874 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.741859 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.743937 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wd92w\" (UniqueName: \"kubernetes.io/projected/059eafe0-4e83-486d-b958-992b00aa0878-kube-api-access-wd92w\") pod \"multus-additional-cni-plugins-dbsfs\" (UID: \"059eafe0-4e83-486d-b958-992b00aa0878\") " pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.747183 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.747235 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.747253 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.747277 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.747294 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.756897 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.757223 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient 
PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",
\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":
[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6f
b6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\
"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.761636 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.761670 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.761681 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.761697 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.761708 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.770184 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.772692 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.775923 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.775994 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.776025 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.776061 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.776084 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.785248 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.789018 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.789119 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.789178 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.789255 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.789323 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.796844 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.796993 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818156 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818190 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818202 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818222 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818234 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.818762 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pt4jb" event={"ID":"4a27959f-3f41-4683-87d6-7b2a9210d634","Type":"ContainerStarted","Data":"83d9ea6f9a6599f452980d42ef9dc9d13a2ed55fa322cb6c3d5f855143803506"} Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.820250 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:15 crc kubenswrapper[5136]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Mar 20 06:51:15 crc kubenswrapper[5136]: set -uo pipefail Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Mar 20 06:51:15 crc kubenswrapper[5136]: HOSTS_FILE="/etc/hosts" Mar 20 06:51:15 crc kubenswrapper[5136]: TEMP_FILE="/etc/hosts.tmp" Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: IFS=', ' read -r -a services <<< "${SERVICES}" Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Make a temporary file with the old hosts file's attributes. Mar 20 06:51:15 crc kubenswrapper[5136]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Mar 20 06:51:15 crc kubenswrapper[5136]: echo "Failed to preserve hosts file. Exiting." 
Mar 20 06:51:15 crc kubenswrapper[5136]: exit 1 Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: while true; do Mar 20 06:51:15 crc kubenswrapper[5136]: declare -A svc_ips Mar 20 06:51:15 crc kubenswrapper[5136]: for svc in "${services[@]}"; do Mar 20 06:51:15 crc kubenswrapper[5136]: # Fetch service IP from cluster dns if present. We make several tries Mar 20 06:51:15 crc kubenswrapper[5136]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Mar 20 06:51:15 crc kubenswrapper[5136]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Mar 20 06:51:15 crc kubenswrapper[5136]: # support UDP loadbalancers and require reaching DNS through TCP. Mar 20 06:51:15 crc kubenswrapper[5136]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Mar 20 06:51:15 crc kubenswrapper[5136]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Mar 20 06:51:15 crc kubenswrapper[5136]: for i in ${!cmds[*]} Mar 20 06:51:15 crc kubenswrapper[5136]: do Mar 20 06:51:15 crc kubenswrapper[5136]: ips=($(eval "${cmds[i]}")) Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: svc_ips["${svc}"]="${ips[@]}" Mar 20 06:51:15 crc kubenswrapper[5136]: break Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Update /etc/hosts only if we get valid service IPs Mar 20 06:51:15 crc kubenswrapper[5136]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Mar 20 06:51:15 crc kubenswrapper[5136]: # Stale entries could exist in /etc/hosts if the service is deleted Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ -n "${svc_ips[*]-}" ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Mar 20 06:51:15 crc kubenswrapper[5136]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Mar 20 06:51:15 crc kubenswrapper[5136]: # Only continue rebuilding the hosts entries if its original content is preserved Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: continue Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # Append resolver entries for services Mar 20 06:51:15 crc kubenswrapper[5136]: rc=0 Mar 20 06:51:15 crc kubenswrapper[5136]: for svc in "${!svc_ips[@]}"; do Mar 20 06:51:15 crc kubenswrapper[5136]: for ip in ${svc_ips[${svc}]}; do Mar 20 06:51:15 crc kubenswrapper[5136]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: if [[ $rc -ne 0 ]]; then Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: continue Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: Mar 20 06:51:15 crc kubenswrapper[5136]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Mar 20 06:51:15 crc kubenswrapper[5136]: # Replace /etc/hosts with our modified version if needed Mar 20 06:51:15 crc kubenswrapper[5136]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Mar 20 06:51:15 crc kubenswrapper[5136]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Mar 20 06:51:15 crc kubenswrapper[5136]: fi Mar 20 06:51:15 crc kubenswrapper[5136]: sleep 60 & wait Mar 20 06:51:15 crc kubenswrapper[5136]: unset svc_ips Mar 20 06:51:15 crc kubenswrapper[5136]: done Mar 20 06:51:15 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-njj48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-pt4jb_openshift-dns(4a27959f-3f41-4683-87d6-7b2a9210d634): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:15 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.820987 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.821353 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-pt4jb" podUID="4a27959f-3f41-4683-87d6-7b2a9210d634" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.829240 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-tjpps" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.832107 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: W0320 06:51:15.834930 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf64ebce8_37f2_4631_9b8b_d34ebc9b93ba.slice/crio-075a96b005188740a40783675493adaee0253fb7bd1c86fd69929f3b319276f7 WatchSource:0}: Error finding container 075a96b005188740a40783675493adaee0253fb7bd1c86fd69929f3b319276f7: Status 404 returned error can't find the container with id 075a96b005188740a40783675493adaee0253fb7bd1c86fd69929f3b319276f7 Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.835703 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.837719 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: W0320 06:51:15.838630 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod263c5427_a835_40c6_93cb_4bb66a83ea5b.slice/crio-18004b385b788eb1cc3a9afac0160c58ea75a8e7f77ca5f5520deed36cd9c1b5 WatchSource:0}: Error finding container 18004b385b788eb1cc3a9afac0160c58ea75a8e7f77ca5f5520deed36cd9c1b5: Status 404 returned error can't find the container with id 18004b385b788eb1cc3a9afac0160c58ea75a8e7f77ca5f5520deed36cd9c1b5 Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.842214 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.843417 5136 
pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.843742 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.843952 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:15 crc kubenswrapper[5136]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 20 06:51:15 crc kubenswrapper[5136]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 20 06:51:15 crc kubenswrapper[5136]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlfxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-tjpps_openshift-multus(263c5427-a835-40c6-93cb-4bb66a83ea5b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:15 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.845511 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-tjpps" podUID="263c5427-a835-40c6-93cb-4bb66a83ea5b" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.846775 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbmbh"] Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.847880 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: W0320 06:51:15.849727 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod059eafe0_4e83_486d_b958_992b00aa0878.slice/crio-0dbfbfc98637ffc99d695ec77ba68c86041d535157c89bb27cf3986d0c5ba8d4 WatchSource:0}: Error finding container 0dbfbfc98637ffc99d695ec77ba68c86041d535157c89bb27cf3986d0c5ba8d4: Status 404 returned error can't find the container with id 0dbfbfc98637ffc99d695ec77ba68c86041d535157c89bb27cf3986d0c5ba8d4 Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851023 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851300 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851335 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851363 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851312 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851567 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851571 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.851898 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.852404 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wd92w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-dbsfs_openshift-multus(059eafe0-4e83-486d-b958-992b00aa0878): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:15 crc kubenswrapper[5136]: E0320 06:51:15.853546 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" podUID="059eafe0-4e83-486d-b958-992b00aa0878" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.866015 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.875953 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.883726 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.894331 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.903791 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.913615 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916074 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916110 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916155 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916195 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916254 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916275 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916325 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916348 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916390 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916412 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916433 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916517 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrnqr\" (UniqueName: \"kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916581 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916646 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc 
kubenswrapper[5136]: I0320 06:51:15.916682 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916732 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916751 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916772 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.916850 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:15 crc 
kubenswrapper[5136]: I0320 06:51:15.921451 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.921486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.921703 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.921721 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.921744 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.921764 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:15Z","lastTransitionTime":"2026-03-20T06:51:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.930772 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.944305 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.955731 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.968917 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.975784 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:15 crc kubenswrapper[5136]: I0320 06:51:15.995157 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.005970 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.017919 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.017955 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.017973 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 
20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.017989 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018019 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018048 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018066 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018066 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: 
I0320 06:51:16.018084 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018184 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018244 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018271 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018320 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018347 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018414 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrnqr\" (UniqueName: \"kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018462 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018483 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018504 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018549 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018569 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018590 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018702 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018714 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018736 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units\") pod 
\"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018757 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018768 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018783 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018792 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018838 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018840 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.018874 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019015 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019043 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019064 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019085 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019106 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019128 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.019610 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.020984 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.023701 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.025314 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.025350 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.025360 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.025377 5136 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.025387 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.036392 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrnqr\" (UniqueName: \"kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr\") pod \"ovnkube-node-nbmbh\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.039115 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.047300 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.058715 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.066627 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.078620 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.089896 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.105410 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.128361 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.128426 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.128443 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.128467 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.128481 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.165938 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:16 crc kubenswrapper[5136]: W0320 06:51:16.186083 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod963bf1ca_b871_4cad_a1fc_cf829a70a81a.slice/crio-fc8a676d87b2c6b9273e55ddfc0af4b456dbdcc2adee4a1bbfceebb87273789e WatchSource:0}: Error finding container fc8a676d87b2c6b9273e55ddfc0af4b456dbdcc2adee4a1bbfceebb87273789e: Status 404 returned error can't find the container with id fc8a676d87b2c6b9273e55ddfc0af4b456dbdcc2adee4a1bbfceebb87273789e Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.189596 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:16 crc kubenswrapper[5136]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 20 06:51:16 crc kubenswrapper[5136]: apiVersion: v1 Mar 20 06:51:16 crc kubenswrapper[5136]: clusters: Mar 20 06:51:16 crc kubenswrapper[5136]: - cluster: Mar 20 06:51:16 crc kubenswrapper[5136]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 20 06:51:16 crc kubenswrapper[5136]: server: https://api-int.crc.testing:6443 Mar 20 06:51:16 crc kubenswrapper[5136]: name: default-cluster Mar 20 06:51:16 crc kubenswrapper[5136]: contexts: Mar 20 06:51:16 crc 
kubenswrapper[5136]: - context: Mar 20 06:51:16 crc kubenswrapper[5136]: cluster: default-cluster Mar 20 06:51:16 crc kubenswrapper[5136]: namespace: default Mar 20 06:51:16 crc kubenswrapper[5136]: user: default-auth Mar 20 06:51:16 crc kubenswrapper[5136]: name: default-context Mar 20 06:51:16 crc kubenswrapper[5136]: current-context: default-context Mar 20 06:51:16 crc kubenswrapper[5136]: kind: Config Mar 20 06:51:16 crc kubenswrapper[5136]: preferences: {} Mar 20 06:51:16 crc kubenswrapper[5136]: users: Mar 20 06:51:16 crc kubenswrapper[5136]: - name: default-auth Mar 20 06:51:16 crc kubenswrapper[5136]: user: Mar 20 06:51:16 crc kubenswrapper[5136]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 06:51:16 crc kubenswrapper[5136]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 06:51:16 crc kubenswrapper[5136]: EOF Mar 20 06:51:16 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrnqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Mar 20 06:51:16 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.190896 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.232217 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.232545 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.232700 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.232909 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.233126 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.336552 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.336607 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.336623 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.336648 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.336665 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.396652 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.396746 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.396885 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.396895 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.397034 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.397214 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.438978 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.439040 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.439064 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.439095 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.439120 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.541139 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.541192 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.541233 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.541269 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.541291 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.644784 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.644883 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.644924 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.644961 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.644980 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.747656 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.747704 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.747715 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.747733 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.747745 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.822690 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerStarted","Data":"18004b385b788eb1cc3a9afac0160c58ea75a8e7f77ca5f5520deed36cd9c1b5"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.824236 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"075a96b005188740a40783675493adaee0253fb7bd1c86fd69929f3b319276f7"} Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.825573 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:16 crc kubenswrapper[5136]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Mar 20 06:51:16 crc kubenswrapper[5136]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Mar 20 06:51:16 crc kubenswrapper[5136]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:
,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlfxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:fal
se,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-tjpps_openshift-multus(263c5427-a835-40c6-93cb-4bb66a83ea5b): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:16 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.825998 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"fc8a676d87b2c6b9273e55ddfc0af4b456dbdcc2adee4a1bbfceebb87273789e"} Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.826372 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.826750 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-tjpps" podUID="263c5427-a835-40c6-93cb-4bb66a83ea5b" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.827530 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerStarted","Data":"0dbfbfc98637ffc99d695ec77ba68c86041d535157c89bb27cf3986d0c5ba8d4"} Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.828532 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:16 crc kubenswrapper[5136]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Mar 20 06:51:16 crc kubenswrapper[5136]: apiVersion: v1 Mar 20 06:51:16 crc kubenswrapper[5136]: clusters: Mar 20 06:51:16 crc kubenswrapper[5136]: - cluster: Mar 20 06:51:16 crc kubenswrapper[5136]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Mar 20 06:51:16 crc kubenswrapper[5136]: server: https://api-int.crc.testing:6443 Mar 20 06:51:16 crc kubenswrapper[5136]: name: default-cluster Mar 20 06:51:16 crc kubenswrapper[5136]: contexts: Mar 20 06:51:16 crc kubenswrapper[5136]: - context: Mar 20 06:51:16 crc kubenswrapper[5136]: cluster: default-cluster Mar 20 06:51:16 crc kubenswrapper[5136]: namespace: default Mar 20 06:51:16 crc kubenswrapper[5136]: user: default-auth Mar 20 06:51:16 crc kubenswrapper[5136]: name: default-context Mar 20 06:51:16 crc kubenswrapper[5136]: current-context: default-context Mar 20 06:51:16 crc kubenswrapper[5136]: kind: Config Mar 20 06:51:16 crc kubenswrapper[5136]: preferences: {} Mar 20 06:51:16 crc kubenswrapper[5136]: users: Mar 20 06:51:16 crc kubenswrapper[5136]: - name: default-auth Mar 20 06:51:16 crc kubenswrapper[5136]: user: Mar 20 06:51:16 crc kubenswrapper[5136]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 06:51:16 crc kubenswrapper[5136]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Mar 20 06:51:16 crc 
kubenswrapper[5136]: EOF Mar 20 06:51:16 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrnqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:16 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.829120 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 
--upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-94xgp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.829567 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wd92w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-dbsfs_openshift-multus(059eafe0-4e83-486d-b958-992b00aa0878): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.831216 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" podUID="059eafe0-4e83-486d-b958-992b00aa0878" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.831246 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:51:16 crc kubenswrapper[5136]: E0320 06:51:16.832510 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.843297 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.850490 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.850528 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.850539 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.850556 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.850568 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.862993 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.878868 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.888986 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.908794 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.931161 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.946789 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.954138 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.954195 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.954213 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.954239 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.954257 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:16Z","lastTransitionTime":"2026-03-20T06:51:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.974036 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:16 crc kubenswrapper[5136]: I0320 06:51:16.990178 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.007683 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.024772 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.046932 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.057225 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" 
Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.057265 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.057282 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.057305 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.057323 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.062675 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.080337 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.094336 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.118603 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.131965 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.145860 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.161090 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.161126 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.161138 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.161156 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.161170 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.165296 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.176738 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.190228 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.206284 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.222351 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.237532 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.249586 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.264521 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.264583 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.264610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.264643 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.264668 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.269295 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.367564 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.367620 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.367637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.367661 5136 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.367679 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: E0320 06:51:17.398012 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:17 crc kubenswrapper[5136]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Mar 20 06:51:17 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:17 crc kubenswrapper[5136]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Mar 20 06:51:17 crc kubenswrapper[5136]: source /etc/kubernetes/apiserver-url.env Mar 20 06:51:17 crc kubenswrapper[5136]: else Mar 20 06:51:17 crc kubenswrapper[5136]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Mar 20 06:51:17 crc kubenswrapper[5136]: exit 1 Mar 20 06:51:17 crc kubenswrapper[5136]: fi Mar 20 06:51:17 crc kubenswrapper[5136]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Mar 20 06:51:17 crc kubenswrapper[5136]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},
EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFi
eldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:17 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:17 crc kubenswrapper[5136]: E0320 06:51:17.399199 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.470650 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 
06:51:17.470706 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.470722 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.470747 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.470785 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.574392 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.574441 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.574458 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.574480 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.574496 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.677405 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.677469 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.677486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.677509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.677527 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.780041 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.780081 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.780092 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.780109 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.780122 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.882377 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.882436 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.882454 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.882479 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.882498 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.985183 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.985242 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.985260 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.985285 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:17 crc kubenswrapper[5136]: I0320 06:51:17.985306 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:17Z","lastTransitionTime":"2026-03-20T06:51:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.087756 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.087802 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.087842 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.087857 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.087866 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.191732 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.191799 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.191847 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.191873 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.191890 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.294754 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.294855 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.294873 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.294899 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.294916 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.396516 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.396551 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.397039 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.397106 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.397274 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.397368 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.399478 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:18 crc kubenswrapper[5136]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:51:18 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:51:18 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:18 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:51:18 crc kubenswrapper[5136]: set +o allexport Mar 20 06:51:18 crc kubenswrapper[5136]: fi Mar 20 06:51:18 crc kubenswrapper[5136]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Mar 20 06:51:18 crc kubenswrapper[5136]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Mar 20 06:51:18 crc kubenswrapper[5136]: ho_enable="--enable-hybrid-overlay" Mar 20 06:51:18 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Mar 20 06:51:18 crc kubenswrapper[5136]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Mar 20 06:51:18 crc kubenswrapper[5136]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Mar 20 06:51:18 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:51:18 crc kubenswrapper[5136]: --webhook-cert-dir="/etc/webhook-cert" \ Mar 20 06:51:18 crc kubenswrapper[5136]: --webhook-host=127.0.0.1 \ Mar 20 06:51:18 crc kubenswrapper[5136]: --webhook-port=9743 \ Mar 20 06:51:18 crc kubenswrapper[5136]: ${ho_enable} \ Mar 20 06:51:18 crc kubenswrapper[5136]: --enable-interconnect \ Mar 20 06:51:18 crc kubenswrapper[5136]: 
--disable-approver \ Mar 20 06:51:18 crc kubenswrapper[5136]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Mar 20 06:51:18 crc kubenswrapper[5136]: --wait-for-kubernetes-api=200s \ Mar 20 06:51:18 crc kubenswrapper[5136]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Mar 20 06:51:18 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:51:18 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:n
il,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:18 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.400509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.400554 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.400579 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.400606 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.400627 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.402331 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:18 crc kubenswrapper[5136]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Mar 20 06:51:18 crc kubenswrapper[5136]: if [[ -f "/env/_master" ]]; then Mar 20 06:51:18 crc kubenswrapper[5136]: set -o allexport Mar 20 06:51:18 crc kubenswrapper[5136]: source "/env/_master" Mar 20 06:51:18 crc kubenswrapper[5136]: set +o allexport Mar 20 06:51:18 crc kubenswrapper[5136]: fi Mar 20 06:51:18 crc kubenswrapper[5136]: Mar 20 06:51:18 crc kubenswrapper[5136]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Mar 20 06:51:18 crc kubenswrapper[5136]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Mar 20 06:51:18 crc kubenswrapper[5136]: --disable-webhook \ Mar 20 06:51:18 crc kubenswrapper[5136]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Mar 20 06:51:18 crc kubenswrapper[5136]: --loglevel="${LOGLEVEL}" Mar 20 06:51:18 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:18 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:18 crc kubenswrapper[5136]: E0320 06:51:18.403540 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.413186 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.425672 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.440097 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.450991 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.466252 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.474729 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.486218 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.503870 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.504200 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.504416 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 
06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.504581 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.504743 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.515676 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.531566 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.548364 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.560156 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.570330 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.581896 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.607432 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.607489 5136 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.607509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.607534 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.607552 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.709848 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.709910 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.709928 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.709953 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.709971 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.812635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.812677 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.812693 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.812718 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.812735 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.914873 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.914928 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.914943 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.914963 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:18 crc kubenswrapper[5136]: I0320 06:51:18.914978 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:18Z","lastTransitionTime":"2026-03-20T06:51:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.018660 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.018721 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.018737 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.018762 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.018777 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:19Z","lastTransitionTime":"2026-03-20T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.120763 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.120795 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.120804 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.120841 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.120854 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:19Z","lastTransitionTime":"2026-03-20T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.222761 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.222795 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.222802 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.222836 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.222848 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:19Z","lastTransitionTime":"2026-03-20T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.326149 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.326216 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.326239 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.326266 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.326289 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:19Z","lastTransitionTime":"2026-03-20T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.398560 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:51:19 crc kubenswrapper[5136]: E0320 06:51:19.399089 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Mar 20 06:51:19 crc kubenswrapper[5136]: E0320 06:51:19.399495 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Mar 20 06:51:19 crc kubenswrapper[5136]: E0320 06:51:19.400630 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.428732 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.428791 
5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.428855 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.428889 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.428912 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:19Z","lastTransitionTime":"2026-03-20T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.531858 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.531891 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.531901 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.531915 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.531925 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:19Z","lastTransitionTime":"2026-03-20T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.634832 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.634902 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.634916 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.634936 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.634953 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:19Z","lastTransitionTime":"2026-03-20T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.738853 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.738921 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.738940 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.738967 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.739026 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:19Z","lastTransitionTime":"2026-03-20T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.841991 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.842055 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.842073 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.842099 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.842118 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:19Z","lastTransitionTime":"2026-03-20T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.945661 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.946272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.946417 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.946556 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:19 crc kubenswrapper[5136]: I0320 06:51:19.947147 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:19Z","lastTransitionTime":"2026-03-20T06:51:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.051106 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.051163 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.051175 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.051193 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.051204 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:20Z","lastTransitionTime":"2026-03-20T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.154326 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.154429 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.154452 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.154487 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.154510 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:20Z","lastTransitionTime":"2026-03-20T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.258230 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.258272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.258282 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.258320 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.258333 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:20Z","lastTransitionTime":"2026-03-20T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.361422 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.361789 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.361987 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.362133 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.362270 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:20Z","lastTransitionTime":"2026-03-20T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.396003 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.396201 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:20 crc kubenswrapper[5136]: E0320 06:51:20.396381 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.396438 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:20 crc kubenswrapper[5136]: E0320 06:51:20.396622 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:20 crc kubenswrapper[5136]: E0320 06:51:20.396879 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.466082 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.466162 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.466181 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.466214 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.466235 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:20Z","lastTransitionTime":"2026-03-20T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.569715 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.569791 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.569845 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.569877 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.569900 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:20Z","lastTransitionTime":"2026-03-20T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.673188 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.673263 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.673285 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.673313 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.673331 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:20Z","lastTransitionTime":"2026-03-20T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.776859 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.776911 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.776925 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.776946 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.776959 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:20Z","lastTransitionTime":"2026-03-20T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.879670 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.879706 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.879715 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.879730 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.879740 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:20Z","lastTransitionTime":"2026-03-20T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.983111 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.983178 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.983192 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.983209 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:20 crc kubenswrapper[5136]: I0320 06:51:20.983220 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:20Z","lastTransitionTime":"2026-03-20T06:51:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.085997 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.086056 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.086073 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.086098 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.086114 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.188519 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.188589 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.188614 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.188658 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.188687 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.291197 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.291257 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.291271 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.291290 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.291300 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.393790 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.393887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.393908 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.393940 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.393965 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.495849 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.495880 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.495891 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.495906 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.495917 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.597707 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.597776 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.597800 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.597869 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.597891 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.700894 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.700954 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.700974 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.701007 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.701029 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.715624 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-g5hkc"] Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.716154 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.719107 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.719501 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.721058 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.721139 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.741019 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.751154 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.767085 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.782902 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.784357 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9076e831-6703-4014-9b7d-eb438a0b62f3-host\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 
06:51:21.784432 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9076e831-6703-4014-9b7d-eb438a0b62f3-serviceca\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.784692 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbpcd\" (UniqueName: \"kubernetes.io/projected/9076e831-6703-4014-9b7d-eb438a0b62f3-kube-api-access-hbpcd\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.796052 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.804046 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.804109 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.804135 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.804165 5136 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.804189 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.812950 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.829414 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.843940 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.868125 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.879661 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.885589 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hbpcd\" (UniqueName: \"kubernetes.io/projected/9076e831-6703-4014-9b7d-eb438a0b62f3-kube-api-access-hbpcd\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.885645 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9076e831-6703-4014-9b7d-eb438a0b62f3-host\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.885675 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9076e831-6703-4014-9b7d-eb438a0b62f3-serviceca\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.885889 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" 
(UniqueName: \"kubernetes.io/host-path/9076e831-6703-4014-9b7d-eb438a0b62f3-host\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.886776 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/9076e831-6703-4014-9b7d-eb438a0b62f3-serviceca\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.894944 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.906478 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hbpcd\" (UniqueName: \"kubernetes.io/projected/9076e831-6703-4014-9b7d-eb438a0b62f3-kube-api-access-hbpcd\") pod \"node-ca-g5hkc\" (UID: \"9076e831-6703-4014-9b7d-eb438a0b62f3\") " pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907118 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907152 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907164 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907181 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907192 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:21Z","lastTransitionTime":"2026-03-20T06:51:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.907534 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.921108 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:21 crc kubenswrapper[5136]: I0320 06:51:21.931047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.009224 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.009292 5136 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.009310 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.009335 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.009352 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.037445 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-g5hkc" Mar 20 06:51:22 crc kubenswrapper[5136]: W0320 06:51:22.051487 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9076e831_6703_4014_9b7d_eb438a0b62f3.slice/crio-037090a33bb13d8e1de8ae6b0921acca55263501456e7c44289924e71f635278 WatchSource:0}: Error finding container 037090a33bb13d8e1de8ae6b0921acca55263501456e7c44289924e71f635278: Status 404 returned error can't find the container with id 037090a33bb13d8e1de8ae6b0921acca55263501456e7c44289924e71f635278 Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.054647 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:22 crc kubenswrapper[5136]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 20 06:51:22 crc kubenswrapper[5136]: while [ true ]; Mar 20 06:51:22 crc kubenswrapper[5136]: do Mar 20 06:51:22 crc kubenswrapper[5136]: for f in $(ls /tmp/serviceca); do Mar 20 06:51:22 crc kubenswrapper[5136]: echo $f Mar 20 06:51:22 crc kubenswrapper[5136]: ca_file_path="/tmp/serviceca/${f}" Mar 20 06:51:22 crc kubenswrapper[5136]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 20 06:51:22 crc kubenswrapper[5136]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 20 06:51:22 crc kubenswrapper[5136]: if [ -e "${reg_dir_path}" ]; then Mar 20 06:51:22 crc kubenswrapper[5136]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 20 06:51:22 crc kubenswrapper[5136]: else Mar 20 06:51:22 crc kubenswrapper[5136]: mkdir $reg_dir_path Mar 20 06:51:22 crc kubenswrapper[5136]: cp $ca_file_path $reg_dir_path/ca.crt Mar 20 06:51:22 crc kubenswrapper[5136]: fi Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: for d in $(ls /etc/docker/certs.d); do 
Mar 20 06:51:22 crc kubenswrapper[5136]: echo $d Mar 20 06:51:22 crc kubenswrapper[5136]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 20 06:51:22 crc kubenswrapper[5136]: reg_conf_path="/tmp/serviceca/${dp}" Mar 20 06:51:22 crc kubenswrapper[5136]: if [ ! -e "${reg_conf_path}" ]; then Mar 20 06:51:22 crc kubenswrapper[5136]: rm -rf /etc/docker/certs.d/$d Mar 20 06:51:22 crc kubenswrapper[5136]: fi Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: sleep 60 & wait ${!} Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbpcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
node-ca-g5hkc_openshift-image-registry(9076e831-6703-4014-9b7d-eb438a0b62f3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:22 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.056410 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-g5hkc" podUID="9076e831-6703-4014-9b7d-eb438a0b62f3" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.112437 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.112491 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.112513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.112543 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.112566 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.215840 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.215917 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.215936 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.216316 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.216523 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.320123 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.320170 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.320188 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.320211 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.320301 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.396190 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.396226 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.396196 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.396390 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.396475 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.396609 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.423036 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.423101 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.423119 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.423142 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.423160 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.525849 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.526694 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.527041 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.527262 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.527445 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.630180 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.630244 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.630267 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.630297 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.630318 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.733325 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.733378 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.733390 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.733408 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.733419 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.836580 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.836630 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.836642 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.836663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.836675 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.844696 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g5hkc" event={"ID":"9076e831-6703-4014-9b7d-eb438a0b62f3","Type":"ContainerStarted","Data":"037090a33bb13d8e1de8ae6b0921acca55263501456e7c44289924e71f635278"} Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.846571 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:51:22 crc kubenswrapper[5136]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Mar 20 06:51:22 crc kubenswrapper[5136]: while [ true ]; Mar 20 06:51:22 crc kubenswrapper[5136]: do Mar 20 06:51:22 crc kubenswrapper[5136]: for f in $(ls /tmp/serviceca); do Mar 20 06:51:22 crc kubenswrapper[5136]: echo $f Mar 20 06:51:22 crc kubenswrapper[5136]: ca_file_path="/tmp/serviceca/${f}" Mar 20 06:51:22 crc kubenswrapper[5136]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Mar 20 06:51:22 crc kubenswrapper[5136]: reg_dir_path="/etc/docker/certs.d/${f}" Mar 20 06:51:22 crc kubenswrapper[5136]: if [ -e "${reg_dir_path}" ]; then Mar 20 06:51:22 crc kubenswrapper[5136]: cp -u $ca_file_path $reg_dir_path/ca.crt Mar 20 06:51:22 crc kubenswrapper[5136]: else Mar 20 06:51:22 crc kubenswrapper[5136]: mkdir $reg_dir_path Mar 20 06:51:22 crc kubenswrapper[5136]: cp $ca_file_path $reg_dir_path/ca.crt Mar 20 06:51:22 crc kubenswrapper[5136]: fi Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: for d in $(ls /etc/docker/certs.d); do Mar 20 06:51:22 crc kubenswrapper[5136]: echo $d Mar 20 06:51:22 crc kubenswrapper[5136]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Mar 20 06:51:22 crc kubenswrapper[5136]: reg_conf_path="/tmp/serviceca/${dp}" Mar 20 06:51:22 crc kubenswrapper[5136]: if [ ! 
-e "${reg_conf_path}" ]; then Mar 20 06:51:22 crc kubenswrapper[5136]: rm -rf /etc/docker/certs.d/$d Mar 20 06:51:22 crc kubenswrapper[5136]: fi Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: sleep 60 & wait ${!} Mar 20 06:51:22 crc kubenswrapper[5136]: done Mar 20 06:51:22 crc kubenswrapper[5136]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hbpcd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-g5hkc_openshift-image-registry(9076e831-6703-4014-9b7d-eb438a0b62f3): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Mar 20 06:51:22 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:51:22 crc kubenswrapper[5136]: E0320 06:51:22.855926 5136 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-g5hkc" podUID="9076e831-6703-4014-9b7d-eb438a0b62f3" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.867914 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.881561 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.906783 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.924720 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.938793 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.938870 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.938888 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.938914 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.938931 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:22Z","lastTransitionTime":"2026-03-20T06:51:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.945256 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.965841 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.979897 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:22 crc kubenswrapper[5136]: I0320 06:51:22.994303 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.008025 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.017892 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.031550 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.039419 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.041276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.041313 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.041325 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.041345 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.041357 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.047894 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.058314 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.143795 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.143863 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.143873 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.143887 5136 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.143898 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.246672 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.246690 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.246698 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.246708 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.246716 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.349206 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.349248 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.349262 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.349281 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.349293 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.452208 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.452248 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.452259 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.452275 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.452287 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.555380 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.555450 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.555466 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.555488 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.555504 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.658600 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.658657 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.658674 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.658698 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.658715 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.761643 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.761679 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.761691 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.761705 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.761714 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.864100 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.864161 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.864182 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.864205 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.864222 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.968112 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.968182 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.968209 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.968239 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:23 crc kubenswrapper[5136]: I0320 06:51:23.968262 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:23Z","lastTransitionTime":"2026-03-20T06:51:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.070728 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.070786 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.070805 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.070865 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.070884 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.173486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.173549 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.173562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.173578 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.173590 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.276184 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.276251 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.276276 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.276305 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.276327 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.312638 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.312751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.312790 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.312849 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.312894 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.312998 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:51:56.312963718 +0000 UTC m=+148.572274909 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.312997 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313051 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313073 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313074 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313089 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313100 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313094 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313113 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:56.313092602 +0000 UTC m=+148.572403793 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313114 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313222 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:56.313191166 +0000 UTC m=+148.572502357 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313285 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:56.313246158 +0000 UTC m=+148.572557399 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.313310 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:56.313300619 +0000 UTC m=+148.572611880 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.379628 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.379697 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.379721 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.379749 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.379769 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.395961 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.396033 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.396054 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.396169 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.396374 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 06:51:24 crc kubenswrapper[5136]: E0320 06:51:24.396697 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.406107 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.482567 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.482617 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.482634 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.482660 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.482678 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.585127 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.585167 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.585180 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.585197 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.585208 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.687580 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.687616 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.687625 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.687639 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.687647 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.789845 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.789916 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.789928 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.789956 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.789969 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.891857 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.891901 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.891911 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.891925 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.891936 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.995756 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.995851 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.995887 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.995915 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:24 crc kubenswrapper[5136]: I0320 06:51:24.995953 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:24Z","lastTransitionTime":"2026-03-20T06:51:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.099391 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.099441 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.099453 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.099472 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.099483 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.202715 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.202769 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.202781 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.202800 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.202831 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.305677 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.305730 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.305746 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.305768 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.305786 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.408531 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.408585 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.408595 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.408609 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.408620 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.512005 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.512072 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.512095 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.512126 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.512149 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.615408 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.615470 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.615488 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.615511 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.615529 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.718569 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.718638 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.718660 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.718688 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.718710 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.821128 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.821167 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.821178 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.821194 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.821206 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.923599 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.923637 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.923645 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.923659 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.923669 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:25Z","lastTransitionTime":"2026-03-20T06:51:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:25 crc kubenswrapper[5136]: I0320 06:51:25.978721 5136 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.025461 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.025594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.025806 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.025850 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.025862 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.056643 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.056682 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.056690 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.056704 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.056715 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.065406 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.068389 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.068418 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.068427 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.068440 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.068448 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.076633 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.079835 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.079868 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.079876 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.079888 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.079896 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.089910 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.093627 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.093668 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.093682 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.093704 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.093720 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.102197 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.104558 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.104604 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.104616 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.104632 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.104644 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.113091 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:26Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.113241 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.128705 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.128746 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.128762 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.128784 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.128798 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.231243 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.231298 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.231309 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.231326 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.231335 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.334330 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.334366 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.334376 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.334392 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.334405 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.396077 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.396165 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.396110 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.396272 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.396359 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:26 crc kubenswrapper[5136]: E0320 06:51:26.396498 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.440587 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.440639 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.440667 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.440690 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.440710 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.543763 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.543806 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.543845 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.543866 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.543880 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.646597 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.646650 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.646663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.646683 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.646702 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.749071 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.749100 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.749108 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.749120 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.749129 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.851283 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.851337 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.851349 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.851365 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.851376 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.954261 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.954296 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.954304 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.954317 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:26 crc kubenswrapper[5136]: I0320 06:51:26.954327 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:26Z","lastTransitionTime":"2026-03-20T06:51:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.056524 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.056581 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.056594 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.056613 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.056628 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.158738 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.158807 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.158858 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.158880 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.158897 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.262932 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.263009 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.263027 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.263048 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.263064 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.366804 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.366922 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.366941 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.366968 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.366987 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.470539 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.470795 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.470804 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.470840 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.470850 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.487902 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt"] Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.488374 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.491058 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.491190 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.504744 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.529592 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.540240 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.545172 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.545228 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.545402 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.545471 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v92zk\" (UniqueName: \"kubernetes.io/projected/36fc020e-a22e-4bde-90c1-4e52cdefde58-kube-api-access-v92zk\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.549715 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.557008 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.566471 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.572956 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc 
kubenswrapper[5136]: I0320 06:51:27.572979 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.572987 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.573001 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.573012 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.575419 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.587011 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.602970 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922
69314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.613595 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready 
status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.626682 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.636928 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.644873 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.646843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.646915 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.646949 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-env-overrides\") pod 
\"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.646970 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v92zk\" (UniqueName: \"kubernetes.io/projected/36fc020e-a22e-4bde-90c1-4e52cdefde58-kube-api-access-v92zk\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.647720 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.647912 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/36fc020e-a22e-4bde-90c1-4e52cdefde58-env-overrides\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.651317 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/36fc020e-a22e-4bde-90c1-4e52cdefde58-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.656292 5136 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.664907 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v92zk\" (UniqueName: \"kubernetes.io/projected/36fc020e-a22e-4bde-90c1-4e52cdefde58-kube-api-access-v92zk\") pod \"ovnkube-control-plane-749d76644c-pqsdt\" (UID: \"36fc020e-a22e-4bde-90c1-4e52cdefde58\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675680 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675806 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675890 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675908 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675934 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.675952 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.684354 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.779401 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.779436 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.779447 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.779464 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.779475 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.806950 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" Mar 20 06:51:27 crc kubenswrapper[5136]: W0320 06:51:27.825371 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36fc020e_a22e_4bde_90c1_4e52cdefde58.slice/crio-148d70d7e18cf66c3d8907f976b18c3a3bf279100b85c3a32493b9a8a499f8f8 WatchSource:0}: Error finding container 148d70d7e18cf66c3d8907f976b18c3a3bf279100b85c3a32493b9a8a499f8f8: Status 404 returned error can't find the container with id 148d70d7e18cf66c3d8907f976b18c3a3bf279100b85c3a32493b9a8a499f8f8 Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.860652 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" event={"ID":"36fc020e-a22e-4bde-90c1-4e52cdefde58","Type":"ContainerStarted","Data":"148d70d7e18cf66c3d8907f976b18c3a3bf279100b85c3a32493b9a8a499f8f8"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.863057 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f" exitCode=0 Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.863115 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.874680 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.882292 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.882469 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.882568 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.882600 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.882620 5136 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.885609 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.894063 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.908495 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.923468 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.933920 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.945782 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.956674 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.967954 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.981652 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.986193 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.986231 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.986243 5136 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.986287 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.986303 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:27Z","lastTransitionTime":"2026-03-20T06:51:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:27 crc kubenswrapper[5136]: I0320 06:51:27.993298 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.001877 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.009858 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.022999 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.031060 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe06
2b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.040481 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.088616 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.088663 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.088675 5136 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.088691 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.088703 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:28Z","lastTransitionTime":"2026-03-20T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.191724 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.191753 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.191762 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.191931 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.191949 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:28Z","lastTransitionTime":"2026-03-20T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.193954 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-jz6hg"] Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.194415 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.194557 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.208297 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.219716 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\"
:true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.235075 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.247125 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.254890 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58tdm\" (UniqueName: \"kubernetes.io/projected/b5572feb-df7d-4f3a-9b83-3be3de943668-kube-api-access-58tdm\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.254947 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.256887 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.268176 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.280697 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.293707 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.296778 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.296806 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.296836 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.296850 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.296859 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:28Z","lastTransitionTime":"2026-03-20T06:51:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.301758 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.309930 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.317739 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.323476 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.333420 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.343291 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.354355 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.355741 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58tdm\" (UniqueName: \"kubernetes.io/projected/b5572feb-df7d-4f3a-9b83-3be3de943668-kube-api-access-58tdm\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.355925 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.356097 
5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.356164 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:28.856148291 +0000 UTC m=+121.115459442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.364150 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.370672 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58tdm\" (UniqueName: \"kubernetes.io/projected/b5572feb-df7d-4f3a-9b83-3be3de943668-kube-api-access-58tdm\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.372865 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.396209 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.396321 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.396643 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.396725 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.396799 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.396912 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.397063 5136 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.407925 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.417620 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.433083 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.443950 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.456419 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.466710 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.474108 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.487505 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.490931 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.497646 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\
",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720
243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.511973 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.519691 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name
\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.527484 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.535561 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.545433 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.554084 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.560871 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.569107 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.859245 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.859447 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:28 crc kubenswrapper[5136]: E0320 06:51:28.859526 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:29.859508242 +0000 UTC m=+122.118819393 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.872310 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" event={"ID":"36fc020e-a22e-4bde-90c1-4e52cdefde58","Type":"ContainerStarted","Data":"e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.872361 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" event={"ID":"36fc020e-a22e-4bde-90c1-4e52cdefde58","Type":"ContainerStarted","Data":"d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.874198 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerStarted","Data":"84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.876670 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80" exitCode=0 Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.876737 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80"} Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.884354 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.894915 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.904177 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.917252 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"rea
dy\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host
/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.927303 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.939111 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.951318 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.964084 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.979588 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:28 crc kubenswrapper[5136]: I0320 06:51:28.991902 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922
69314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.005150 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.015968 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.024577 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.034133 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.043801 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.052764 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.062958 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.073159 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.083729 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.092276 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.108458 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.119742 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.139101 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.180643 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.216850 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.255235 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.300752 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.334440 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.373943 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.396173 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:29 crc kubenswrapper[5136]: E0320 06:51:29.396631 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.414949 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.457367 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.497458 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.537920 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.578652 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc 
kubenswrapper[5136]: I0320 06:51:29.870716 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:29 crc kubenswrapper[5136]: E0320 06:51:29.870959 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:29 crc kubenswrapper[5136]: E0320 06:51:29.871099 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:31.871071179 +0000 UTC m=+124.130382350 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.882159 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d" exitCode=0 Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.882225 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d"} Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.884748 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3"} Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.884804 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9"} Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.899930 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.922537 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.935503 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.947738 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.961650 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c
25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.971604 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.986441 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:29 crc kubenswrapper[5136]: I0320 06:51:29.998132 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.022678 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.033209 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.045723 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.063310 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.095515 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.142320 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.180158 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.215892 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.256583 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.302461 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.334740 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.378790 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.396711 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.396718 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:30 crc kubenswrapper[5136]: E0320 06:51:30.396973 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.397005 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:30 crc kubenswrapper[5136]: E0320 06:51:30.397217 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:30 crc kubenswrapper[5136]: E0320 06:51:30.397660 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.415436 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.456601 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.499899 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.534005 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.576480 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.616294 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.653718 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.696644 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.733557 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.776873 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.815398 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.854942 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.891769 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d" exitCode=0 Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.891857 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d"} Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.909025 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.938485 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe06
2b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:30 crc kubenswrapper[5136]: I0320 06:51:30.975265 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.018450 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.059255 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.093774 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.144251 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f9
06400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.174896 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.218606 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.260173 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.299984 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.343347 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.378070 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.395647 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:31 crc kubenswrapper[5136]: E0320 06:51:31.395838 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.422431 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.456162 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.495762 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.536140 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef
78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.581246 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.619718 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.893644 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:31 crc kubenswrapper[5136]: E0320 06:51:31.893779 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:31 crc kubenswrapper[5136]: E0320 06:51:31.893859 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs 
podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:35.8938418 +0000 UTC m=+128.153152951 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.897985 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-pt4jb" event={"ID":"4a27959f-3f41-4683-87d6-7b2a9210d634","Type":"ContainerStarted","Data":"ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8"} Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.902334 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f" exitCode=0 Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.902384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f"} Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.904211 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07"} Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.915104 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for 
pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.925951 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe06
2b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.937326 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.946975 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.954637 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.965595 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a6
9384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.978080 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:31 crc kubenswrapper[5136]: I0320 06:51:31.991873 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.002884 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.016326 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.056729 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.096391 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.160891 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\
\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.184236 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.220702 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.256913 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.299620 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.344512 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.382570 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.396316 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.396365 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.396441 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:32 crc kubenswrapper[5136]: E0320 06:51:32.396440 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:32 crc kubenswrapper[5136]: E0320 06:51:32.396542 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:32 crc kubenswrapper[5136]: E0320 06:51:32.396609 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.421693 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.465483 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f9
06400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
3-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.496106 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.538333 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.581500 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.618384 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.665873 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.702238 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922
69314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.736491 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.777257 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.819966 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a6
9384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.857228 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.901596 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.909844 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983" exitCode=0 Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.909936 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.915486 5136 generic.go:334] "Generic (PLEG): container finished" podID="059eafe0-4e83-486d-b958-992b00aa0878" containerID="a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66" exitCode=0 Mar 20 06:51:32 crc 
kubenswrapper[5136]: I0320 06:51:32.915709 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerDied","Data":"a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66"} Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.935692 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:32 crc kubenswrapper[5136]: I0320 06:51:32.980584 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.018387 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.057165 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.095843 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.159898 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.175657 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.215161 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd
8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.255192 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.297552 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.340915 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.377689 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922
69314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.395900 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:33 crc kubenswrapper[5136]: E0320 06:51:33.396030 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.416047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.1
68.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.464429 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: E0320 06:51:33.492030 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.496513 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.538778 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.576037 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.617871 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.658148 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.920770 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.925865 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" 
event={"ID":"059eafe0-4e83-486d-b958-992b00aa0878","Type":"ContainerStarted","Data":"dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.927457 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.927514 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931640 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931665 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931676 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931684 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" 
event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931694 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.931703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.938267 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.953869 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.964159 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.980866 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging 
kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.992152 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://922
69314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:33 crc kubenswrapper[5136]: I0320 06:51:33.999614 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"qu
ay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.008907 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.015468 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"start
ed\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.024283 5136 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"moun
tPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.056873 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.096047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.135350 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.177495 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.218167 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.257674 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.302633 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.337276 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc 
kubenswrapper[5136]: I0320 06:51:34.379846 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc 
kubenswrapper[5136]: I0320 06:51:34.396579 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.396844 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:34 crc kubenswrapper[5136]: E0320 06:51:34.396844 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.396873 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:34 crc kubenswrapper[5136]: E0320 06:51:34.396910 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:34 crc kubenswrapper[5136]: E0320 06:51:34.396987 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.397527 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.426223 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.458199 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2
42b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 
2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.500511 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.539950 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.581073 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e61366024
5aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.618437 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.660141 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.699215 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.739767 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.776379 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.829166 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.859883 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.901168 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.937652 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.939855 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477"} Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.940357 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.941852 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:34 crc kubenswrapper[5136]: I0320 06:51:34.982114 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.026485 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.058232 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.099852 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.138399 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.181646 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.217871 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.258969 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.300479 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.337712 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.381782 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.396555 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:35 crc kubenswrapper[5136]: E0320 06:51:35.396627 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.430222 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.460667 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.497964 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.540535 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.583112 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.640668 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.676918 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.700547 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.937063 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:35 crc kubenswrapper[5136]: E0320 06:51:35.937433 
5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:35 crc kubenswrapper[5136]: E0320 06:51:35.937594 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:51:43.937573472 +0000 UTC m=+136.196884633 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.944542 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-g5hkc" event={"ID":"9076e831-6703-4014-9b7d-eb438a0b62f3","Type":"ContainerStarted","Data":"766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f"} Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.948482 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.960235 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.970664 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:35 crc kubenswrapper[5136]: I0320 06:51:35.985580 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:35Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.002404 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.015263 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.024770 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.039201 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.053529 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.067618 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.099185 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.144635 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.185656 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.219725 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.257198 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.301448 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.364998 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.382229 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.396615 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.396609 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.396847 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.396637 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.396974 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.397225 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.482143 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.482190 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.482202 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.482222 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.482235 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:36Z","lastTransitionTime":"2026-03-20T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.496060 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.504503 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.504546 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.504559 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.504578 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.504627 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:36Z","lastTransitionTime":"2026-03-20T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.516103 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.519589 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.519621 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.519635 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.519654 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.519666 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:36Z","lastTransitionTime":"2026-03-20T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.530285 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.533473 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.533505 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.533513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.533528 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.533539 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:36Z","lastTransitionTime":"2026-03-20T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.543665 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.547562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.547601 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.547612 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.547630 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:36 crc kubenswrapper[5136]: I0320 06:51:36.547641 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:36Z","lastTransitionTime":"2026-03-20T06:51:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.558661 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:36Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:36Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:36 crc kubenswrapper[5136]: E0320 06:51:36.558772 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.396399 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:37 crc kubenswrapper[5136]: E0320 06:51:37.396557 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.961027 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf"} Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.961357 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.961371 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.977109 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:37Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:37 crc kubenswrapper[5136]: I0320 06:51:37.990493 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.004150 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.017051 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd7
91fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.031033 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.043343 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.056493 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.072521 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.087704 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.102017 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.111357 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.120750 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.133510 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.141679 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.153885 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.163326 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.174117 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.190047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.203746 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.212209 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.222636 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.233299 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.243220 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.253039 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.260941 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.270104 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.282402 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.293726 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.305162 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.320254 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.330035 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.341198 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.352769 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.363428 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.382633 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.395851 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.395901 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:38 crc kubenswrapper[5136]: E0320 06:51:38.395959 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:38 crc kubenswrapper[5136]: E0320 06:51:38.396027 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.396059 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:38 crc kubenswrapper[5136]: E0320 06:51:38.396204 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.408998 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.418417 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.430445 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.440694 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.454319 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.464941 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.474686 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.486622 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: E0320 06:51:38.492411 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.505085 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.516075 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.532245 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.543525 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.556015 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.572647 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.586506 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.604730 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.615407 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd79
1fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kube
rnetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.965273 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.985225 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:38 crc kubenswrapper[5136]: I0320 06:51:38.997130 5136 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d77325745
3265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.005149 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.014122 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.024001 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.034539 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.042678 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.056879 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.065208 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.081487 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.094981 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.105351 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.123003 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.134791 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.143885 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef31
8bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.153950 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.164879 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d0
6c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.177431 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\"
,\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-03-20T06:51:39Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:39 crc kubenswrapper[5136]: I0320 06:51:39.395994 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:39 crc kubenswrapper[5136]: E0320 06:51:39.396120 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.397032 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:40 crc kubenswrapper[5136]: E0320 06:51:40.397163 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.397040 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.397218 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:40 crc kubenswrapper[5136]: E0320 06:51:40.397256 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:40 crc kubenswrapper[5136]: E0320 06:51:40.397364 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.973136 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/0.log" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.977011 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf" exitCode=1 Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.977070 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf"} Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.977765 5136 scope.go:117] "RemoveContainer" 
containerID="8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf" Mar 20 06:51:40 crc kubenswrapper[5136]: I0320 06:51:40.995973 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:40Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.017171 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:40Z\\\",\\\"message\\\":\\\"0493 7163 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 06:51:40.240539 7163 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.240619 
7163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.240685 7163 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.240802 7163 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:40.241011 7163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.241512 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:51:40.241538 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:51:40.241560 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 06:51:40.241564 7163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:51:40.241585 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:51:40.241607 7163 factory.go:656] Stopping watch factory\\\\nI0320 06:51:40.241622 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0320 
0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\
\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.030357 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.043848 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.054776 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.068047 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.083307 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.101147 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.118625 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.131146 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.144628 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.161176 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":
\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.171536 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.186444 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.195358 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.206048 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.216232 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.396520 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:41 crc kubenswrapper[5136]: E0320 06:51:41.396677 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.982044 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/1.log" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.982552 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/0.log" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.984714 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" exitCode=1 Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.984765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627"} Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.984808 5136 scope.go:117] "RemoveContainer" containerID="8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf" Mar 20 06:51:41 crc kubenswrapper[5136]: I0320 06:51:41.985617 5136 scope.go:117] "RemoveContainer" containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" Mar 20 06:51:41 crc kubenswrapper[5136]: E0320 06:51:41.985786 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 
06:51:42.001917 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery 
information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:41Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.011533 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.022418 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.035056 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.047998 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.061197 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.072447 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.082536 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.093321 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.104800 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.114718 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.131457 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.141648 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.157589 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.169414 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.181645 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.199794 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8528edc8f354a97d847207fdb6b6aadd9c6b6accf0023e48540253f15ec55fbf\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:40Z\\\",\\\"message\\\":\\\"0493 7163 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0320 06:51:40.240539 7163 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.240619 7163 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 
06:51:40.240685 7163 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.240802 7163 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:40.241011 7163 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:40.241512 7163 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:51:40.241538 7163 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:51:40.241560 7163 handler.go:208] Removed *v1.Node event handler 2\\\\nI0320 06:51:40.241564 7163 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:51:40.241585 7163 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:51:40.241607 7163 factory.go:656] Stopping watch factory\\\\nI0320 06:51:40.241622 7163 ovnkube.go:599] Stopped ovnkube\\\\nI0320 0\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector 
*v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\
\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n
rnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:42Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.396443 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.396513 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.396467 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:42 crc kubenswrapper[5136]: E0320 06:51:42.396671 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:42 crc kubenswrapper[5136]: E0320 06:51:42.396777 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:42 crc kubenswrapper[5136]: E0320 06:51:42.396903 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.991360 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/1.log" Mar 20 06:51:42 crc kubenswrapper[5136]: I0320 06:51:42.996753 5136 scope.go:117] "RemoveContainer" containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" Mar 20 06:51:42 crc kubenswrapper[5136]: E0320 06:51:42.997280 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.017868 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1
a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.033298 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc 
kubenswrapper[5136]: I0320 06:51:43.053175 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.071767 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.093110 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.158049 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.172366 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.186494 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.199432 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.211661 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.225711 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.238779 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.252067 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.263158 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.273179 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.284720 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.295418 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:43Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:43 crc kubenswrapper[5136]: I0320 06:51:43.395973 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:43 crc kubenswrapper[5136]: E0320 06:51:43.396126 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:43 crc kubenswrapper[5136]: E0320 06:51:43.494405 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:51:44 crc kubenswrapper[5136]: I0320 06:51:44.031040 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:44 crc kubenswrapper[5136]: E0320 06:51:44.031250 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:44 crc kubenswrapper[5136]: E0320 06:51:44.031551 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:52:00.031523997 +0000 UTC m=+152.290835188 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:51:44 crc kubenswrapper[5136]: I0320 06:51:44.396229 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:44 crc kubenswrapper[5136]: E0320 06:51:44.396406 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:44 crc kubenswrapper[5136]: I0320 06:51:44.396235 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:44 crc kubenswrapper[5136]: E0320 06:51:44.396512 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:44 crc kubenswrapper[5136]: I0320 06:51:44.396526 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:44 crc kubenswrapper[5136]: E0320 06:51:44.396756 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:45 crc kubenswrapper[5136]: I0320 06:51:45.396540 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:45 crc kubenswrapper[5136]: E0320 06:51:45.396725 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.166704 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.167524 5136 scope.go:117] "RemoveContainer" containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.167657 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.396281 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.396631 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.396450 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.397204 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.396293 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.397376 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.939513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.939551 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.939562 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.939578 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.939589 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:46Z","lastTransitionTime":"2026-03-20T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.959964 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:46Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.968177 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.968237 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.968255 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.968279 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.968297 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:46Z","lastTransitionTime":"2026-03-20T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:46 crc kubenswrapper[5136]: E0320 06:51:46.983254 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:46Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.987272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.987336 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.987359 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.987385 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:46 crc kubenswrapper[5136]: I0320 06:51:46.987406 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:46Z","lastTransitionTime":"2026-03-20T06:51:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:47 crc kubenswrapper[5136]: E0320 06:51:47.003040 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:46Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.007202 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.007254 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.007272 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.007295 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.007313 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:47Z","lastTransitionTime":"2026-03-20T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:47 crc kubenswrapper[5136]: E0320 06:51:47.027155 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.032111 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.032139 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.032148 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.032161 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.032171 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:47Z","lastTransitionTime":"2026-03-20T06:51:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:47 crc kubenswrapper[5136]: E0320 06:51:47.050572 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:47Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:47 crc kubenswrapper[5136]: E0320 06:51:47.050753 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:47 crc kubenswrapper[5136]: I0320 06:51:47.396189 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:47 crc kubenswrapper[5136]: E0320 06:51:47.396369 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.395752 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.395768 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:48 crc kubenswrapper[5136]: E0320 06:51:48.395982 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.396029 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:48 crc kubenswrapper[5136]: E0320 06:51:48.396120 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:48 crc kubenswrapper[5136]: E0320 06:51:48.396204 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.418527 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":
\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.433997 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.449738 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.465893 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc 
kubenswrapper[5136]: I0320 06:51:48.483682 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: E0320 06:51:48.495876 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.505252 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.519195 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.534443 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.547962 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.562089 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.577138 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.599374 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.609935 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a6
9384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.622101 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.633491 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.645566 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:48 crc kubenswrapper[5136]: I0320 06:51:48.656969 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:49 crc kubenswrapper[5136]: I0320 06:51:49.396206 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:49 crc kubenswrapper[5136]: E0320 06:51:49.396394 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:50 crc kubenswrapper[5136]: I0320 06:51:50.396279 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:50 crc kubenswrapper[5136]: I0320 06:51:50.396313 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:50 crc kubenswrapper[5136]: E0320 06:51:50.396473 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:50 crc kubenswrapper[5136]: I0320 06:51:50.396615 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:50 crc kubenswrapper[5136]: E0320 06:51:50.396720 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:50 crc kubenswrapper[5136]: E0320 06:51:50.396995 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:51 crc kubenswrapper[5136]: I0320 06:51:51.396342 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:51 crc kubenswrapper[5136]: E0320 06:51:51.396492 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:52 crc kubenswrapper[5136]: I0320 06:51:52.396035 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:52 crc kubenswrapper[5136]: I0320 06:51:52.396035 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:52 crc kubenswrapper[5136]: E0320 06:51:52.396208 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:52 crc kubenswrapper[5136]: E0320 06:51:52.396349 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:52 crc kubenswrapper[5136]: I0320 06:51:52.396040 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:52 crc kubenswrapper[5136]: E0320 06:51:52.396582 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.395594 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:53 crc kubenswrapper[5136]: E0320 06:51:53.395773 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.418577 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Mar 20 06:51:53 crc kubenswrapper[5136]: E0320 06:51:53.497460 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.922223 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.942759 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.958658 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.977516 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:53 crc kubenswrapper[5136]: I0320 06:51:53.998705 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:53Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.011259 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.023038 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.042715 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.055911 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.068503 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.081699 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.094890 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.113463 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.130612 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.145995 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.158223 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.169712 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.179908 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.193014 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:54Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.396387 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.396396 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:54 crc kubenswrapper[5136]: E0320 06:51:54.396584 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:54 crc kubenswrapper[5136]: E0320 06:51:54.396752 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:54 crc kubenswrapper[5136]: I0320 06:51:54.396418 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:54 crc kubenswrapper[5136]: E0320 06:51:54.397101 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:55 crc kubenswrapper[5136]: I0320 06:51:55.395648 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:55 crc kubenswrapper[5136]: E0320 06:51:55.395780 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.366602 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.366710 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.366741 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.366763 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.366800 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366893 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:00.366839668 +0000 UTC m=+212.626150869 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366925 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366948 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366969 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366974 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367021 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:00.366995552 +0000 UTC m=+212.626306713 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.366982 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367043 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:00.367033403 +0000 UTC m=+212.626344564 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367064 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:00.367053534 +0000 UTC m=+212.626364785 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367074 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367105 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367126 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.367200 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:00.367180708 +0000 UTC m=+212.626491899 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.396664 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.396776 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.396898 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:56 crc kubenswrapper[5136]: I0320 06:51:56.396935 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.397055 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:56 crc kubenswrapper[5136]: E0320 06:51:56.397197 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.274191 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.274282 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.274294 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.274309 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.274320 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:57Z","lastTransitionTime":"2026-03-20T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.303301 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.308711 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.308766 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.308785 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.308808 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.308851 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:57Z","lastTransitionTime":"2026-03-20T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.329496 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.334006 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.334059 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.334077 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.334098 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.334114 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:57Z","lastTransitionTime":"2026-03-20T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.351924 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.356584 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.356636 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.356654 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.356677 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.356694 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:57Z","lastTransitionTime":"2026-03-20T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.372630 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.377343 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.377400 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.377416 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.377439 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.377457 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:51:57Z","lastTransitionTime":"2026-03-20T06:51:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:51:57 crc kubenswrapper[5136]: I0320 06:51:57.396111 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.396336 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.397190 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:57Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:57 crc kubenswrapper[5136]: E0320 06:51:57.397407 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.396231 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.396231 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.396475 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:51:58 crc kubenswrapper[5136]: E0320 06:51:58.396403 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:51:58 crc kubenswrapper[5136]: E0320 06:51:58.396536 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:51:58 crc kubenswrapper[5136]: E0320 06:51:58.396625 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.411359 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\"
:\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2
597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.428708 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.446494 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.472370 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.488706 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: E0320 06:51:58.498157 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.505906 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ 
ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.524222 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.536124 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.547309 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.559282 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.573293 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.584199 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.597713 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.610899 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.622284 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.631220 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.646176 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:58 crc kubenswrapper[5136]: I0320 06:51:58.656031 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:51:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:51:59 crc kubenswrapper[5136]: I0320 06:51:59.396369 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:51:59 crc kubenswrapper[5136]: E0320 06:51:59.396552 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:00 crc kubenswrapper[5136]: I0320 06:52:00.108540 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:00 crc kubenswrapper[5136]: E0320 06:52:00.108713 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:52:00 crc kubenswrapper[5136]: E0320 06:52:00.108948 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:52:32.10893283 +0000 UTC m=+184.368243981 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:52:00 crc kubenswrapper[5136]: I0320 06:52:00.396616 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:00 crc kubenswrapper[5136]: I0320 06:52:00.396701 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:00 crc kubenswrapper[5136]: E0320 06:52:00.396806 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:00 crc kubenswrapper[5136]: I0320 06:52:00.397481 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:00 crc kubenswrapper[5136]: E0320 06:52:00.397616 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:00 crc kubenswrapper[5136]: E0320 06:52:00.398005 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:00 crc kubenswrapper[5136]: I0320 06:52:00.398174 5136 scope.go:117] "RemoveContainer" containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.074614 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/1.log" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.078510 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00"} Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.078807 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.097154 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.116642 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.132763 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.148668 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.160748 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.177054 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.191510 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.202304 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.215197 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.231302 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.241858 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.250469 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.263964 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.274381 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.284362 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.297646 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.309149 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.333456 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\"
:\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:01Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:01 crc kubenswrapper[5136]: I0320 06:52:01.396106 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:01 crc kubenswrapper[5136]: E0320 06:52:01.396235 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.083910 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/2.log" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.084689 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/1.log" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.087002 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" exitCode=1 Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.087036 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00"} Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.087069 5136 scope.go:117] "RemoveContainer" 
containerID="0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.088166 5136 scope.go:117] "RemoveContainer" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" Mar 20 06:52:02 crc kubenswrapper[5136]: E0320 06:52:02.088445 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.105359 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.117058 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.128595 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.143638 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.153894 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.164537 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.175364 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.186017 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.201772 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbe00ca047c640ff7c3a35cde78e239bda74c66031a4b55ba8beaf803b2f627\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:51:41Z\\\",\\\"message\\\":\\\"ers/externalversions/factory.go:140\\\\nI0320 06:51:41.822309 7278 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822355 7278 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:51:41.822501 7278 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823026 7278 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0320 06:51:41.823071 7278 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823129 7278 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823169 7278 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:51:41.823370 7278 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:41Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\
\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.i
o/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.214341 5136 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml 
--kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.226291 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.235688 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.246206 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.257521 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.269622 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.280799 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.288720 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.304509 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:02Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.396321 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.396345 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:02 crc kubenswrapper[5136]: I0320 06:52:02.396369 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:02 crc kubenswrapper[5136]: E0320 06:52:02.396452 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:02 crc kubenswrapper[5136]: E0320 06:52:02.396601 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:02 crc kubenswrapper[5136]: E0320 06:52:02.396639 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.093986 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/2.log" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.099992 5136 scope.go:117] "RemoveContainer" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" Mar 20 06:52:03 crc kubenswrapper[5136]: E0320 06:52:03.100239 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.121119 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.134157 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.149327 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.165106 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.181134 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.199602 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.218406 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.230002 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.242697 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.257237 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.270555 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.281736 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.296841 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.311730 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.328237 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.345561 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.358270 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.377308 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:03Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:03 crc kubenswrapper[5136]: I0320 06:52:03.396526 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:03 crc kubenswrapper[5136]: E0320 06:52:03.396647 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:03 crc kubenswrapper[5136]: E0320 06:52:03.499777 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:52:04 crc kubenswrapper[5136]: I0320 06:52:04.395959 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:04 crc kubenswrapper[5136]: I0320 06:52:04.396076 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:04 crc kubenswrapper[5136]: E0320 06:52:04.396110 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:04 crc kubenswrapper[5136]: I0320 06:52:04.396183 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:04 crc kubenswrapper[5136]: E0320 06:52:04.396302 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:04 crc kubenswrapper[5136]: E0320 06:52:04.396437 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:05 crc kubenswrapper[5136]: I0320 06:52:05.396162 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:05 crc kubenswrapper[5136]: E0320 06:52:05.396329 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:05 crc kubenswrapper[5136]: I0320 06:52:05.416395 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Mar 20 06:52:06 crc kubenswrapper[5136]: I0320 06:52:06.396299 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:06 crc kubenswrapper[5136]: I0320 06:52:06.396399 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:06 crc kubenswrapper[5136]: E0320 06:52:06.396484 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:06 crc kubenswrapper[5136]: I0320 06:52:06.396508 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:06 crc kubenswrapper[5136]: E0320 06:52:06.396686 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:06 crc kubenswrapper[5136]: E0320 06:52:06.396882 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.396437 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.396635 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.478262 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.478331 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.478349 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.478373 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.478390 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:07Z","lastTransitionTime":"2026-03-20T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.498906 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.505775 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.505844 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.505857 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.505875 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.505888 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:07Z","lastTransitionTime":"2026-03-20T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.522425 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.526372 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.526415 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.526432 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.526456 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.526471 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:07Z","lastTransitionTime":"2026-03-20T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.544554 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.548428 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.548486 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.548509 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.548538 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.548558 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:07Z","lastTransitionTime":"2026-03-20T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.566427 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.570700 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.570735 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.570747 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.570764 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:07 crc kubenswrapper[5136]: I0320 06:52:07.570776 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:07Z","lastTransitionTime":"2026-03-20T06:52:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.588020 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:07Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:07Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:07 crc kubenswrapper[5136]: E0320 06:52:07.588125 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.396383 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.396473 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.396473 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:08 crc kubenswrapper[5136]: E0320 06:52:08.396591 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:08 crc kubenswrapper[5136]: E0320 06:52:08.396888 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:08 crc kubenswrapper[5136]: E0320 06:52:08.397044 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.412146 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.427308 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.457590 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.474079 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.484896 5136 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: E0320 06:52:08.500393 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.501462 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1
a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.518130 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc 
kubenswrapper[5136]: I0320 06:52:08.532294 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.546615 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.558500 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.579069 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.594068 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.608035 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.619080 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.629209 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.641481 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.658464 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.671185 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig 
--namespace=openshift-kube-controller-manager -v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:08 crc kubenswrapper[5136]: I0320 06:52:08.687894 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:08Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:09 crc kubenswrapper[5136]: I0320 06:52:09.396663 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:09 crc kubenswrapper[5136]: E0320 06:52:09.396847 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:10 crc kubenswrapper[5136]: I0320 06:52:10.396375 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:10 crc kubenswrapper[5136]: I0320 06:52:10.396418 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:10 crc kubenswrapper[5136]: E0320 06:52:10.396585 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:10 crc kubenswrapper[5136]: I0320 06:52:10.396677 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:10 crc kubenswrapper[5136]: E0320 06:52:10.396848 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:10 crc kubenswrapper[5136]: E0320 06:52:10.397034 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:11 crc kubenswrapper[5136]: I0320 06:52:11.396122 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:11 crc kubenswrapper[5136]: E0320 06:52:11.396267 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:12 crc kubenswrapper[5136]: I0320 06:52:12.396328 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:12 crc kubenswrapper[5136]: I0320 06:52:12.396358 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:12 crc kubenswrapper[5136]: E0320 06:52:12.397156 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:12 crc kubenswrapper[5136]: I0320 06:52:12.396422 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:12 crc kubenswrapper[5136]: E0320 06:52:12.397280 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:12 crc kubenswrapper[5136]: E0320 06:52:12.397518 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:13 crc kubenswrapper[5136]: I0320 06:52:13.396529 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:13 crc kubenswrapper[5136]: E0320 06:52:13.396795 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:13 crc kubenswrapper[5136]: E0320 06:52:13.501350 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:14 crc kubenswrapper[5136]: I0320 06:52:14.396469 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:14 crc kubenswrapper[5136]: I0320 06:52:14.396552 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:14 crc kubenswrapper[5136]: E0320 06:52:14.396679 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:14 crc kubenswrapper[5136]: I0320 06:52:14.396721 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:14 crc kubenswrapper[5136]: E0320 06:52:14.396930 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:14 crc kubenswrapper[5136]: E0320 06:52:14.397038 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.164880 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/0.log" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.164935 5136 generic.go:334] "Generic (PLEG): container finished" podID="263c5427-a835-40c6-93cb-4bb66a83ea5b" containerID="84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68" exitCode=1 Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.164967 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerDied","Data":"84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68"} Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.165336 5136 scope.go:117] "RemoveContainer" containerID="84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.181589 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.207475 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.221349 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.235691 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.248113 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.263139 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.277738 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.296954 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:15Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.309505 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.324742 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.335231 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.347305 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.378616 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.396603 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:15 crc kubenswrapper[5136]: E0320 06:52:15.396711 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.397755 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b761
58d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.411605 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.428175 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.442269 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.457763 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:15 crc kubenswrapper[5136]: I0320 06:52:15.471985 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:15Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.172435 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/0.log" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.172515 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerStarted","Data":"1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644"} Mar 20 06:52:16 crc 
kubenswrapper[5136]: I0320 06:52:16.192509 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.216136 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808
b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b
7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.233550 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc 
kubenswrapper[5136]: I0320 06:52:16.252473 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.269328 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.288488 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.315922 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.333478 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.350896 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.366549 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.382418 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.396442 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.396511 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.396550 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:16 crc kubenswrapper[5136]: E0320 06:52:16.396643 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:16 crc kubenswrapper[5136]: E0320 06:52:16.396855 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:16 crc kubenswrapper[5136]: E0320 06:52:16.396883 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.399982 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.418129 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20
T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.437878 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.457174 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.472145 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.487432 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.517373 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:16 crc kubenswrapper[5136]: I0320 06:52:16.535033 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:16Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.396269 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.396682 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.396941 5136 scope.go:117] "RemoveContainer" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.397100 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.771917 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.772014 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.772032 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.772103 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.772145 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:17Z","lastTransitionTime":"2026-03-20T06:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.790796 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:17Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.795618 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.795989 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.796179 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.796344 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.796542 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:17Z","lastTransitionTime":"2026-03-20T06:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.814435 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:17Z is after 2025-08-24T17:21:41Z"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.819673 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.819915 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.820078 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.820231 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.820373 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:17Z","lastTransitionTime":"2026-03-20T06:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.840229 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:17Z is after 2025-08-24T17:21:41Z"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.844009 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.844047 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.844058 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.844078 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.844092 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:17Z","lastTransitionTime":"2026-03-20T06:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.858566 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:17Z is after 2025-08-24T17:21:41Z"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.862207 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.862227 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.862237 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.862250 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Mar 20 06:52:17 crc kubenswrapper[5136]: I0320 06:52:17.862261 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:17Z","lastTransitionTime":"2026-03-20T06:52:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.875719 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:17Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:17 crc kubenswrapper[5136]: E0320 06:52:17.875894 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.396575 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.396673 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:18 crc kubenswrapper[5136]: E0320 06:52:18.396806 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.396903 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:18 crc kubenswrapper[5136]: E0320 06:52:18.397071 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:18 crc kubenswrapper[5136]: E0320 06:52:18.397250 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.414103 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.434854 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.452433 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.472657 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.488966 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: E0320 06:52:18.502210 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.509889 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\
\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e
27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\",\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 
06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc3
5825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.527190 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.543002 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.564216 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.580971 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.595539 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.620498 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.635945 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.650643 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.664307 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.675582 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.696959 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.712170 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:18 crc kubenswrapper[5136]: I0320 06:52:18.724704 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:18Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:19 crc kubenswrapper[5136]: I0320 06:52:19.396130 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:19 crc kubenswrapper[5136]: E0320 06:52:19.396600 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:20 crc kubenswrapper[5136]: I0320 06:52:20.396609 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:20 crc kubenswrapper[5136]: I0320 06:52:20.396698 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:20 crc kubenswrapper[5136]: I0320 06:52:20.396609 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:20 crc kubenswrapper[5136]: E0320 06:52:20.396878 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:20 crc kubenswrapper[5136]: E0320 06:52:20.396995 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:20 crc kubenswrapper[5136]: E0320 06:52:20.397156 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:21 crc kubenswrapper[5136]: I0320 06:52:21.396448 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:21 crc kubenswrapper[5136]: E0320 06:52:21.396650 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:22 crc kubenswrapper[5136]: I0320 06:52:22.396614 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:22 crc kubenswrapper[5136]: I0320 06:52:22.396657 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:22 crc kubenswrapper[5136]: E0320 06:52:22.396717 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:22 crc kubenswrapper[5136]: I0320 06:52:22.396782 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:22 crc kubenswrapper[5136]: E0320 06:52:22.396993 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:22 crc kubenswrapper[5136]: E0320 06:52:22.397001 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:23 crc kubenswrapper[5136]: I0320 06:52:23.396130 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:23 crc kubenswrapper[5136]: E0320 06:52:23.396333 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:23 crc kubenswrapper[5136]: E0320 06:52:23.503794 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:24 crc kubenswrapper[5136]: I0320 06:52:24.396618 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:24 crc kubenswrapper[5136]: I0320 06:52:24.396749 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:24 crc kubenswrapper[5136]: E0320 06:52:24.396988 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:24 crc kubenswrapper[5136]: E0320 06:52:24.396765 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:24 crc kubenswrapper[5136]: I0320 06:52:24.397047 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:24 crc kubenswrapper[5136]: E0320 06:52:24.397278 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:25 crc kubenswrapper[5136]: I0320 06:52:25.396054 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:25 crc kubenswrapper[5136]: E0320 06:52:25.396230 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:26 crc kubenswrapper[5136]: I0320 06:52:26.395922 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:26 crc kubenswrapper[5136]: I0320 06:52:26.396012 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:26 crc kubenswrapper[5136]: I0320 06:52:26.395922 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:26 crc kubenswrapper[5136]: E0320 06:52:26.396099 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:26 crc kubenswrapper[5136]: E0320 06:52:26.396316 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:26 crc kubenswrapper[5136]: E0320 06:52:26.396369 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:27 crc kubenswrapper[5136]: I0320 06:52:27.396146 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:27 crc kubenswrapper[5136]: E0320 06:52:27.396355 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.107970 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.108023 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.108034 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.108050 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.108062 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:28Z","lastTransitionTime":"2026-03-20T06:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.128173 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.133004 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.133064 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.133082 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.133108 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.133125 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:28Z","lastTransitionTime":"2026-03-20T06:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.151696 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.156941 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.156984 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.157003 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.157026 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.157042 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:28Z","lastTransitionTime":"2026-03-20T06:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.179025 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.184128 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.184278 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.184301 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.184383 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.184409 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:28Z","lastTransitionTime":"2026-03-20T06:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.209026 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.213900 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.213973 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.213996 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.214024 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.214049 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:28Z","lastTransitionTime":"2026-03-20T06:52:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.235273 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.235446 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.395705 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.395757 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.395757 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.395951 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.396084 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.396199 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.419564 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.440464 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.455434 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.475292 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.493562 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: E0320 06:52:28.504511 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.509512 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.545435 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.565895 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.577774 5136 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.590124 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.605133 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\
\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.618158 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.628861 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.643825 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.655602 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.667944 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.683055 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.697611 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:28 crc kubenswrapper[5136]: I0320 06:52:28.720795 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:28Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:29 crc kubenswrapper[5136]: I0320 06:52:29.395709 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:29 crc kubenswrapper[5136]: E0320 06:52:29.395957 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:30 crc kubenswrapper[5136]: I0320 06:52:30.395770 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:30 crc kubenswrapper[5136]: I0320 06:52:30.395889 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:30 crc kubenswrapper[5136]: E0320 06:52:30.396088 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:30 crc kubenswrapper[5136]: I0320 06:52:30.396160 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:30 crc kubenswrapper[5136]: E0320 06:52:30.396318 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:30 crc kubenswrapper[5136]: E0320 06:52:30.396382 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:31 crc kubenswrapper[5136]: I0320 06:52:31.395848 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:31 crc kubenswrapper[5136]: E0320 06:52:31.396007 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:31 crc kubenswrapper[5136]: I0320 06:52:31.397099 5136 scope.go:117] "RemoveContainer" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.161274 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:32 crc kubenswrapper[5136]: E0320 06:52:32.161500 5136 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:52:32 crc kubenswrapper[5136]: E0320 06:52:32.161643 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs podName:b5572feb-df7d-4f3a-9b83-3be3de943668 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:36.161626658 +0000 UTC m=+248.420937809 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs") pod "network-metrics-daemon-jz6hg" (UID: "b5572feb-df7d-4f3a-9b83-3be3de943668") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.245397 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/2.log" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.247852 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.248202 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.260692 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.274961 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.284847 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.298101 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.308917 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.320421 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] 
Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.339800 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.351913 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.362174 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.372317 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.382350 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.396215 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.396281 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:32 crc kubenswrapper[5136]: E0320 06:52:32.396316 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:32 crc kubenswrapper[5136]: E0320 06:52:32.396427 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.396495 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:32 crc kubenswrapper[5136]: E0320 06:52:32.396600 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.397204 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.406570 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.418587 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.426524 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.438323 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.449685 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.459840 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:32 crc kubenswrapper[5136]: I0320 06:52:32.479555 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped 
ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\
",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\
",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:32Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.253507 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/3.log" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.254290 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/2.log" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.257141 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" exitCode=1 Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.257194 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.257234 5136 scope.go:117] "RemoveContainer" containerID="06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.257885 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 06:52:33 crc kubenswrapper[5136]: E0320 06:52:33.258161 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.280690 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.293806 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.303306 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.315286 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.325883 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.338665 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] 
Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.366789 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a
0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resour
ce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":
\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.380055 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.389679 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.396126 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:33 crc kubenswrapper[5136]: E0320 06:52:33.396320 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.401940 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.419144 5136 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.433445 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.445597 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.460929 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.472088 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.483961 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.498010 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: E0320 06:52:33.506535 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.512037 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:33 crc kubenswrapper[5136]: I0320 06:52:33.533620 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://06811e790c3567fadcff364a6d54afeb7aa694efe36e37ade9b2d132a89d7f00\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:01Z\\\",\\\"message\\\":\\\"om k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.180939 7538 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.181240 7538 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI0320 06:52:01.181403 7538 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0320 06:52:01.183029 7538 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0320 06:52:01.183086 7538 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0320 06:52:01.183130 7538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0320 06:52:01.183139 7538 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0320 06:52:01.183161 7538 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0320 06:52:01.183174 7538 factory.go:656] Stopping watch factory\\\\nI0320 06:52:01.183185 7538 handler.go:208] Removed *v1.Node event handler 7\\\\nI0320 06:52:01.183196 7538 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0320 06:52:01.183197 7538 ovnkube.go:599] Stopped ovnkube\\\\nI03\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:00Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:32Z\\\",\\\"message\\\":\\\"penshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0320 06:52:32.267553 7856 transact.go:42] Configuring OVN: 
[{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0320 06:52:32.266351 7856 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni
-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\
\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:33Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.263558 5136 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/3.log" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.269796 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 06:52:34 crc kubenswrapper[5136]: E0320 06:52:34.270128 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.296767 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:32Z\\\",\\\"message\\\":\\\"penshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false 
hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0320 06:52:32.267553 7856 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0320 06:52:32.266351 7856 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.322794 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.354968 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.374246 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.386413 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.396255 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.396293 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.396301 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:34 crc kubenswrapper[5136]: E0320 06:52:34.396375 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:34 crc kubenswrapper[5136]: E0320 06:52:34.396470 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:34 crc kubenswrapper[5136]: E0320 06:52:34.396549 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.397481 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea
83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.410376 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20
T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\
\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.422939 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.441325 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.455239 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.466726 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c
0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.485531 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.502269 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.520344 5136 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.541637 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-
access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuberne
tes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\
\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.556147 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc 
kubenswrapper[5136]: I0320 06:52:34.572733 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.590192 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:34 crc kubenswrapper[5136]: I0320 06:52:34.602323 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:34Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:35 crc kubenswrapper[5136]: I0320 06:52:35.396516 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:35 crc kubenswrapper[5136]: E0320 06:52:35.396655 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:36 crc kubenswrapper[5136]: I0320 06:52:36.396062 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:36 crc kubenswrapper[5136]: I0320 06:52:36.396153 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:36 crc kubenswrapper[5136]: I0320 06:52:36.396153 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:36 crc kubenswrapper[5136]: E0320 06:52:36.396296 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:36 crc kubenswrapper[5136]: E0320 06:52:36.396383 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:36 crc kubenswrapper[5136]: E0320 06:52:36.396587 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:37 crc kubenswrapper[5136]: I0320 06:52:37.396287 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:37 crc kubenswrapper[5136]: E0320 06:52:37.396491 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.278606 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.278669 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.278693 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.278720 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.278741 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:38Z","lastTransitionTime":"2026-03-20T06:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.298354 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.303513 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.303568 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.303586 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.303610 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.303627 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:38Z","lastTransitionTime":"2026-03-20T06:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.322565 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.326760 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.326849 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.326868 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.326890 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.326907 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:38Z","lastTransitionTime":"2026-03-20T06:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.343125 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.347493 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.347708 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.347916 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.348134 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.348343 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:38Z","lastTransitionTime":"2026-03-20T06:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.366793 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.370976 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.371018 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.371030 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.371049 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.371065 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:38Z","lastTransitionTime":"2026-03-20T06:52:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.389566 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.389784 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.395711 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.395900 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.396050 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.396212 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.396395 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.397212 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.416209 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.437979 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.454805 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.472299 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.490173 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: E0320 06:52:38.507246 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.512577 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.548582 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.568431 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.584263 5136 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.599142 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\
\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.613774 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\
\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.634534 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.647991 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.669482 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.685702 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.696515 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.708601 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.718727 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:38 crc kubenswrapper[5136]: I0320 06:52:38.747557 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:32Z\\\",\\\"message\\\":\\\"penshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0320 06:52:32.267553 7856 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0320 06:52:32.266351 7856 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:38Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:39 crc kubenswrapper[5136]: I0320 06:52:39.396284 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:39 crc kubenswrapper[5136]: E0320 06:52:39.396747 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:40 crc kubenswrapper[5136]: I0320 06:52:40.395978 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:40 crc kubenswrapper[5136]: I0320 06:52:40.396036 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:40 crc kubenswrapper[5136]: E0320 06:52:40.396111 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:40 crc kubenswrapper[5136]: E0320 06:52:40.396212 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:40 crc kubenswrapper[5136]: I0320 06:52:40.395924 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:40 crc kubenswrapper[5136]: E0320 06:52:40.396456 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:41 crc kubenswrapper[5136]: I0320 06:52:41.395899 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:41 crc kubenswrapper[5136]: E0320 06:52:41.396216 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:42 crc kubenswrapper[5136]: I0320 06:52:42.395714 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:42 crc kubenswrapper[5136]: E0320 06:52:42.395934 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:42 crc kubenswrapper[5136]: I0320 06:52:42.396300 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:42 crc kubenswrapper[5136]: E0320 06:52:42.396424 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:42 crc kubenswrapper[5136]: I0320 06:52:42.397527 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:42 crc kubenswrapper[5136]: E0320 06:52:42.397670 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:43 crc kubenswrapper[5136]: I0320 06:52:43.395773 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:43 crc kubenswrapper[5136]: E0320 06:52:43.396006 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:43 crc kubenswrapper[5136]: E0320 06:52:43.509006 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:44 crc kubenswrapper[5136]: I0320 06:52:44.396179 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:44 crc kubenswrapper[5136]: I0320 06:52:44.396260 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:44 crc kubenswrapper[5136]: I0320 06:52:44.396192 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:44 crc kubenswrapper[5136]: E0320 06:52:44.396375 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:44 crc kubenswrapper[5136]: E0320 06:52:44.396479 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:44 crc kubenswrapper[5136]: E0320 06:52:44.396528 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:45 crc kubenswrapper[5136]: I0320 06:52:45.395994 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:45 crc kubenswrapper[5136]: E0320 06:52:45.396175 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:45 crc kubenswrapper[5136]: I0320 06:52:45.397151 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 06:52:45 crc kubenswrapper[5136]: E0320 06:52:45.397394 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:46 crc kubenswrapper[5136]: I0320 06:52:46.396280 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:46 crc kubenswrapper[5136]: I0320 06:52:46.396321 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:46 crc kubenswrapper[5136]: I0320 06:52:46.396399 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:46 crc kubenswrapper[5136]: E0320 06:52:46.396521 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:46 crc kubenswrapper[5136]: E0320 06:52:46.396669 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:46 crc kubenswrapper[5136]: E0320 06:52:46.396692 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:47 crc kubenswrapper[5136]: I0320 06:52:47.395705 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:47 crc kubenswrapper[5136]: E0320 06:52:47.395959 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.395724 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.395920 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.396012 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.396108 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.396411 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.396566 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.411150 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50be36b8e64a8322d10690588efc5f18b1e560baa994ffbefc1f097160ebef07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.425485 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.434299 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-pt4jb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4a27959f-3f41-4683-87d6-7b2a9210d634\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ea8d01475d9c34cab4cc186c66e72194cdee9a9eca4ff816062f2e7f3e0e49d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-njj48\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-pt4jb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.447732 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"059eafe0-4e83-486d-b958-992b00aa0878\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3b6ec2546b54bed98187fed5ca0330cf160235d3752b8a3583ac0dd70b9583\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97
f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b95d9c4db9a8eac79f4e78d5548aa1a7212ab6f7370941f00ad8ab7ef968872f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09aa32626c6ce566cc2cbf1f90fbfd0b69b7a05af25572bf2ed658693b662f80\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://452b3195f41ed767d2ff38c4e180c25f675f906400ed6274659579b39280d53d\\\",\\\"exitCode\\\
":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bccf1a5b2d12ca436d62ced825453e4a263133933e5e1f5225ddf7f6eeba492d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495f
fb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://edb04ce031f135c88c0e23e5a287495ffb1a4d11bef74e7d173a1ead3bb19d6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a627e32699c8b7536d3fe8b5db7a21cdea58f230cd0500476ab97a6638e12c66\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\"
:\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wd92w\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-dbsfs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.459619 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b5572feb-df7d-4f3a-9b83-3be3de943668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-58tdm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:28Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-jz6hg\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.470975 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712
098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.483721 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.495615 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.509504 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.512927 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.512969 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.512985 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.513007 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.513024 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:48Z","lastTransitionTime":"2026-03-20T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.516390 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:32Z\\\",\\\"message\\\":\\\"penshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0320 06:52:32.267553 7856 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0320 06:52:32.266351 7856 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.529228 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.530591 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":
\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.533175 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.533228 5136 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.533239 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.533258 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.533271 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:48Z","lastTransitionTime":"2026-03-20T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.546752 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.546964 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.550967 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.551007 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.551016 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.551032 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.551044 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:48Z","lastTransitionTime":"2026-03-20T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.558708 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.564471 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.569062 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.569102 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.569111 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.569126 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.569136 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:48Z","lastTransitionTime":"2026-03-20T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.570451 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.582942 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://51bce191511e5cfa8a991093278041d7e3c643c0b34b6cea1404f3df101740e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-94xgp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-jst28\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.584358 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.587506 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.587540 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.587549 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.587563 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.587574 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:48Z","lastTransitionTime":"2026-03-20T06:52:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.595798 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-tjpps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"263c5427-a835-40c6-93cb-4bb66a83ea5b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:14Z\\\",\\\"message\\\":\\\"2026-03-20T06:51:29+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138\\\\n2026-03-20T06:51:29+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_cf2ac4bb-c4f4-4498-ac06-528cc80de138 to /host/opt/cni/bin/\\\\n2026-03-20T06:51:29Z [verbose] multus-daemon started\\\\n2026-03-20T06:51:29Z [verbose] Readiness Indicator file check\\\\n2026-03-20T06:52:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:52:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dlfxh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-multus\"/\"multus-tjpps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.601107 5136 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"lastTransitionTime\\\":\\\"2026-03-20T06:52:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"35df81f9-549e-4466-8b52-0d5376d2ac8e\\\",\\\"systemUUID\\\":\\\"261f6a19-9ce2-42a3-b473-0fa1ec2ce9f2\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: E0320 06:52:48.601286 5136 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.619929 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"74e7c2eb-78d2-4e11-b100-cd482f320447\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2a1d8bf756502952a8d3a972ac37c6e4f5dbc1c3f9041776d82f44db634ad9a2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82f691f77718c46f2602104ecb33424a7c7443c5b34f9f2322cf5a1f608b1131\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4d76808c1f28f48134ff4f0c3e1a0708c6a2761101662ea1a7b339f52871cbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[
{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://485ffe98cdc3606f5ed78e234077affc1018885123a92cb6def4150118c62f15\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af19f920b4713d01034bfa44614deeacb01000a2d289fd3f9fb8ac327285f0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainer
Statuses\\\":[{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c76a1c0890b155e2a894748a23f2f0decbeb305ae9c3937777615e813dc2f0ab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0b78f461ca953fb7e880c083f24d6e47581eec52140928d4d50e9602c1c85b72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev
@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd5f8b9799f55d9850f9b97bdaa3b1969ca350bf0a79aa45bac25f23fdf07a19\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.637615 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4e961f90217a9d56923d86a94384c8a390caa56d4196d5861e821a4cb16beed7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08fe9e7b76158d57d6bf1a16de1a1dc15a6ebe2648c19bc1e613660245aa4d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.647335 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-g5hkc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9076e831-6703-4014-9b7d-eb438a0b62f3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://766f36609e404a0747adbea9f453adda62bf58a9e4992d531e05f09ef5a3469f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hbpcd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:21Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-g5hkc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:48 crc kubenswrapper[5136]: I0320 06:52:48.659564 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"36fc020e-a22e-4bde-90c1-4e52cdefde58\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d85d37b5c3fd4f89c461ee313b9dca6da1e37a793594a2c02c691bb33585e6c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e2d90e58c43746e81e232ae6fa4356e43b37c0eca9938f0592b99306675a2e3d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v92zk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:27Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-pqsdt\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:48Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:49 crc kubenswrapper[5136]: I0320 06:52:49.396522 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:49 crc kubenswrapper[5136]: E0320 06:52:49.396713 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:50 crc kubenswrapper[5136]: I0320 06:52:50.396688 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:50 crc kubenswrapper[5136]: I0320 06:52:50.397337 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:50 crc kubenswrapper[5136]: E0320 06:52:50.397416 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:50 crc kubenswrapper[5136]: I0320 06:52:50.397552 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:50 crc kubenswrapper[5136]: E0320 06:52:50.397732 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:50 crc kubenswrapper[5136]: E0320 06:52:50.397756 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:51 crc kubenswrapper[5136]: I0320 06:52:51.395568 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:51 crc kubenswrapper[5136]: E0320 06:52:51.395741 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:52 crc kubenswrapper[5136]: I0320 06:52:52.396556 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:52 crc kubenswrapper[5136]: I0320 06:52:52.396603 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:52 crc kubenswrapper[5136]: E0320 06:52:52.396786 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:52 crc kubenswrapper[5136]: I0320 06:52:52.396807 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:52 crc kubenswrapper[5136]: E0320 06:52:52.396994 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:52 crc kubenswrapper[5136]: E0320 06:52:52.397096 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:53 crc kubenswrapper[5136]: I0320 06:52:53.396221 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:53 crc kubenswrapper[5136]: E0320 06:52:53.396419 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:53 crc kubenswrapper[5136]: E0320 06:52:53.511396 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:52:54 crc kubenswrapper[5136]: I0320 06:52:54.396092 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:54 crc kubenswrapper[5136]: I0320 06:52:54.396256 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:54 crc kubenswrapper[5136]: I0320 06:52:54.396467 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:54 crc kubenswrapper[5136]: E0320 06:52:54.396721 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:54 crc kubenswrapper[5136]: E0320 06:52:54.396953 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:54 crc kubenswrapper[5136]: E0320 06:52:54.397158 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:55 crc kubenswrapper[5136]: I0320 06:52:55.396047 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:55 crc kubenswrapper[5136]: E0320 06:52:55.396200 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:56 crc kubenswrapper[5136]: I0320 06:52:56.396699 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:56 crc kubenswrapper[5136]: E0320 06:52:56.396854 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:56 crc kubenswrapper[5136]: I0320 06:52:56.396925 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:56 crc kubenswrapper[5136]: I0320 06:52:56.396949 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:56 crc kubenswrapper[5136]: E0320 06:52:56.397165 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:56 crc kubenswrapper[5136]: E0320 06:52:56.397690 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:57 crc kubenswrapper[5136]: I0320 06:52:57.396121 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:57 crc kubenswrapper[5136]: E0320 06:52:57.396312 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.395772 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.395791 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.395955 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:52:58 crc kubenswrapper[5136]: E0320 06:52:58.396083 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:52:58 crc kubenswrapper[5136]: E0320 06:52:58.396226 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:52:58 crc kubenswrapper[5136]: E0320 06:52:58.396414 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.412549 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8bbb2a7f-4bda-4c44-a68a-2c718a9516ab\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a44c06a1a914825b830473bb76e864cf47d19e8cc9fa15af063382809096e0f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0b500d6a25b7cf64c33713949e8dca4742e2858f84d24ff2ac4c9c4893c34a6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://1b1fb649c59aebee40881e6880ca30c3de1295540683f704558d326660a99084\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25
97126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d967916a86cac36e1416cdddcfe7cfe062b833712098dbf8611b78dc4e8ba636\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.435810 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.448412 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.474122 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:15Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-03-20T06:52:32Z\\\",\\\"message\\\":\\\"penshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0320 06:52:32.267553 7856 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-apiserver/api]} name:Service_openshift-apiserver/api_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.37:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {88e20c31-5b8d-4d44-bbd8-dba87b7dbaf0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0320 06:52:32.266351 7856 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:52:31Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0725e0325222092cdc
a7bbb3b09686542df27a043b715973d793953bfc762983\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:51:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:51:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nrnqr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:51:15Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nbmbh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.489091 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ff2945a8-9e67-4cef-891b-51ab67f94db7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9da254514da50a87a04ac662f94547e77ec900d8b6efca1136ae671b0f4b6ce9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a6d3975b7d167142eac38a4a52abb164b4cc33f2e98f53f70de54c1c965debde\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:25Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager 
-v=2\\\\nI0320 06:49:56.095986 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0320 06:49:56.097003 1 observer_polling.go:159] Starting file observer\\\\nI0320 06:49:56.099906 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0320 06:49:56.102527 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0320 06:50:25.704660 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0320 06:50:25.704739 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:50:25Z is after 
2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:56Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:50:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9326d71f178c4960b895b0883814de0504bdcf7b2c60dd922bbf982132df784b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38fefe7b2b931c847206b730f9cda46a663700efed93353fb44a6e70c60ad37a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.509690 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"08f93948-dc0a-4e68-80fa-26429c0c0654\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-03-20T06:50:39Z\\\"
,\\\"message\\\":\\\"le observer\\\\nW0320 06:50:38.925912 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0320 06:50:38.926056 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0320 06:50:38.926634 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1810778595/tls.crt::/tmp/serving-cert-1810778595/tls.key\\\\\\\"\\\\nI0320 06:50:39.621551 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0320 06:50:39.626059 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0320 06:50:39.626078 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0320 06:50:39.626179 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0320 06:50:39.626187 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0320 06:50:39.632548 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0320 06:50:39.632577 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0320 06:50:39.632584 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0320 06:50:39.632582 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0320 06:50:39.632590 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0320 06:50:39.632596 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0320 06:50:39.632600 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0320 06:50:39.632605 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0320 06:50:39.633850 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-03-20T06:50:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:31Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92269314c8a9cad77c6c5e36a48bb74f8b8
39462a4eb2916705bd8ae0432bab0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: E0320 06:52:58.513067 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.525598 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b43e771f-fe0d-41da-bfe1-2aef6ac02dac\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T06:49:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e07ac41ab70553eaee3292fd9e897821f9e7ae98cd554b3311eb52d7855199bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:49:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8a738fd7f7117310a8dc383219858dd0ee88b66958461824030c6fbfeebf91e0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-03-20T06:49:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-03-20T06:49:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T06:49:28Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.540849 5136 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:50:52Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-03-20T06:51:33Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://767ac650a833da5151d45f9cdd647d7f43ffd3b012a79723906e9839b0408bae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-03-20T06:51:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-03-20T06:52:58Z is after 2025-08-24T17:21:41Z" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.580079 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podStartSLOduration=141.580059686 podStartE2EDuration="2m21.580059686s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.563907547 +0000 UTC m=+210.823218698" watchObservedRunningTime="2026-03-20 06:52:58.580059686 +0000 UTC m=+210.839370837" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.603908 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-tjpps" podStartSLOduration=141.603892856 podStartE2EDuration="2m21.603892856s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.58050291 +0000 UTC m=+210.839814061" watchObservedRunningTime="2026-03-20 06:52:58.603892856 +0000 UTC m=+210.863203997" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.604020 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=53.60401636 podStartE2EDuration="53.60401636s" podCreationTimestamp="2026-03-20 06:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.602688739 +0000 UTC m=+210.861999890" watchObservedRunningTime="2026-03-20 06:52:58.60401636 +0000 UTC m=+210.863327511" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.637457 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-g5hkc" 
podStartSLOduration=142.637443214 podStartE2EDuration="2m22.637443214s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.625574921 +0000 UTC m=+210.884886072" watchObservedRunningTime="2026-03-20 06:52:58.637443214 +0000 UTC m=+210.896754365" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.650174 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-pqsdt" podStartSLOduration=141.650156755 podStartE2EDuration="2m21.650156755s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.637735123 +0000 UTC m=+210.897046274" watchObservedRunningTime="2026-03-20 06:52:58.650156755 +0000 UTC m=+210.909467906" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.697746 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-pt4jb" podStartSLOduration=142.697729164 podStartE2EDuration="2m22.697729164s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:58.674676968 +0000 UTC m=+210.933988119" watchObservedRunningTime="2026-03-20 06:52:58.697729164 +0000 UTC m=+210.957040315" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.708404 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-dbsfs" podStartSLOduration=141.70838945 podStartE2EDuration="2m21.70838945s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 06:52:58.698160318 +0000 UTC m=+210.957471499" watchObservedRunningTime="2026-03-20 06:52:58.70838945 +0000 UTC m=+210.967700601" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.990810 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.990895 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.990913 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.990937 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Mar 20 06:52:58 crc kubenswrapper[5136]: I0320 06:52:58.990957 5136 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-20T06:52:58Z","lastTransitionTime":"2026-03-20T06:52:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.055876 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f"] Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.056762 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.059943 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.059961 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.060194 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.060509 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.077226 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=95.077203574 podStartE2EDuration="1m35.077203574s" podCreationTimestamp="2026-03-20 06:51:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:59.076248624 +0000 UTC m=+211.335559795" watchObservedRunningTime="2026-03-20 06:52:59.077203574 +0000 UTC m=+211.336514765" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.140744 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.140934 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac47cba1-1678-408b-9a4d-21d4f3e964ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.140990 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac47cba1-1678-408b-9a4d-21d4f3e964ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.141082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac47cba1-1678-408b-9a4d-21d4f3e964ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.141314 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.177505 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=66.177485194 podStartE2EDuration="1m6.177485194s" podCreationTimestamp="2026-03-20 
06:51:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:59.176122422 +0000 UTC m=+211.435433593" watchObservedRunningTime="2026-03-20 06:52:59.177485194 +0000 UTC m=+211.436796365" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.192900 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=124.19287727 podStartE2EDuration="2m4.19287727s" podCreationTimestamp="2026-03-20 06:50:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:59.192597221 +0000 UTC m=+211.451908412" watchObservedRunningTime="2026-03-20 06:52:59.19287727 +0000 UTC m=+211.452188431" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.205050 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=123.205028873 podStartE2EDuration="2m3.205028873s" podCreationTimestamp="2026-03-20 06:50:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:52:59.204377222 +0000 UTC m=+211.463688393" watchObservedRunningTime="2026-03-20 06:52:59.205028873 +0000 UTC m=+211.464340034" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242333 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242649 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac47cba1-1678-408b-9a4d-21d4f3e964ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242445 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242806 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac47cba1-1678-408b-9a4d-21d4f3e964ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242949 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac47cba1-1678-408b-9a4d-21d4f3e964ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.242991 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.243148 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ac47cba1-1678-408b-9a4d-21d4f3e964ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.244977 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ac47cba1-1678-408b-9a4d-21d4f3e964ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.252035 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac47cba1-1678-408b-9a4d-21d4f3e964ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.264547 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ac47cba1-1678-408b-9a4d-21d4f3e964ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-pvf2f\" (UID: \"ac47cba1-1678-408b-9a4d-21d4f3e964ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.376383 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.397637 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:52:59 crc kubenswrapper[5136]: E0320 06:52:59.398781 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.399092 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 06:52:59 crc kubenswrapper[5136]: E0320 06:52:59.399264 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nbmbh_openshift-ovn-kubernetes(963bf1ca-b871-4cad-a1fc-cf829a70a81a)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" Mar 20 06:52:59 crc kubenswrapper[5136]: W0320 06:52:59.404170 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac47cba1_1678_408b_9a4d_21d4f3e964ed.slice/crio-541251c099eb0f8dfc80c6aec07ffaba9fdf728949a9a1d9c04949f3d0e4a3e9 WatchSource:0}: Error finding container 541251c099eb0f8dfc80c6aec07ffaba9fdf728949a9a1d9c04949f3d0e4a3e9: Status 404 returned error can't find the container with id 541251c099eb0f8dfc80c6aec07ffaba9fdf728949a9a1d9c04949f3d0e4a3e9 Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.467495 5136 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 20 06:52:59 crc kubenswrapper[5136]: I0320 06:52:59.475920 5136 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.353959 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" event={"ID":"ac47cba1-1678-408b-9a4d-21d4f3e964ed","Type":"ContainerStarted","Data":"e63724aa394bed1cccc97a353a6e9c1076266155e6b9f60bfdfbbf1a33f9cde4"} Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.354044 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" event={"ID":"ac47cba1-1678-408b-9a4d-21d4f3e964ed","Type":"ContainerStarted","Data":"541251c099eb0f8dfc80c6aec07ffaba9fdf728949a9a1d9c04949f3d0e4a3e9"} Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.374494 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-pvf2f" podStartSLOduration=143.374466931 podStartE2EDuration="2m23.374466931s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:00.373694286 +0000 UTC m=+212.633005477" watchObservedRunningTime="2026-03-20 06:53:00.374466931 +0000 UTC m=+212.633778122" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.396296 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.396493 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.396320 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.396610 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.396311 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.396696 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.455439 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.455679 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:55:02.455641389 +0000 UTC m=+334.714952570 (durationBeforeRetry 2m2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.456649 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.456853 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.457038 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.456954 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457334 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:55:02.457313901 +0000 UTC m=+334.716625052 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457082 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457376 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457390 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457433 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:55:02.457426355 +0000 UTC m=+334.716737506 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457169 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457502 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457535 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.457644 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:55:02.457616411 +0000 UTC m=+334.716927602 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 20 06:53:00 crc kubenswrapper[5136]: I0320 06:53:00.458065 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.458151 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:53:00 crc kubenswrapper[5136]: E0320 06:53:00.458318 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:55:02.458307753 +0000 UTC m=+334.717619124 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.359523 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/1.log" Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.360262 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/0.log" Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.360336 5136 generic.go:334] "Generic (PLEG): container finished" podID="263c5427-a835-40c6-93cb-4bb66a83ea5b" containerID="1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644" exitCode=1 Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.360378 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerDied","Data":"1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644"} Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.360422 5136 scope.go:117] "RemoveContainer" containerID="84766a780aa0ce1bcd9ec467c49cca25aea6d24700dbdf031277979a6ce04c68" Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.361047 5136 scope.go:117] "RemoveContainer" containerID="1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644" Mar 20 06:53:01 crc kubenswrapper[5136]: E0320 06:53:01.361311 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-tjpps_openshift-multus(263c5427-a835-40c6-93cb-4bb66a83ea5b)\"" 
pod="openshift-multus/multus-tjpps" podUID="263c5427-a835-40c6-93cb-4bb66a83ea5b" Mar 20 06:53:01 crc kubenswrapper[5136]: I0320 06:53:01.395707 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:01 crc kubenswrapper[5136]: E0320 06:53:01.395861 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:02 crc kubenswrapper[5136]: I0320 06:53:02.365506 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/1.log" Mar 20 06:53:02 crc kubenswrapper[5136]: I0320 06:53:02.395658 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:02 crc kubenswrapper[5136]: I0320 06:53:02.395662 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:02 crc kubenswrapper[5136]: E0320 06:53:02.395901 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:02 crc kubenswrapper[5136]: I0320 06:53:02.395923 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:02 crc kubenswrapper[5136]: E0320 06:53:02.396067 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:02 crc kubenswrapper[5136]: E0320 06:53:02.396312 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:03 crc kubenswrapper[5136]: I0320 06:53:03.396495 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:03 crc kubenswrapper[5136]: E0320 06:53:03.396736 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:03 crc kubenswrapper[5136]: E0320 06:53:03.514388 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Mar 20 06:53:04 crc kubenswrapper[5136]: I0320 06:53:04.395997 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:04 crc kubenswrapper[5136]: I0320 06:53:04.396068 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:04 crc kubenswrapper[5136]: E0320 06:53:04.396125 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:04 crc kubenswrapper[5136]: E0320 06:53:04.396221 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:04 crc kubenswrapper[5136]: I0320 06:53:04.396293 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:04 crc kubenswrapper[5136]: E0320 06:53:04.396409 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:05 crc kubenswrapper[5136]: I0320 06:53:05.396441 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:05 crc kubenswrapper[5136]: E0320 06:53:05.396624 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:06 crc kubenswrapper[5136]: I0320 06:53:06.396020 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:06 crc kubenswrapper[5136]: I0320 06:53:06.396064 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:06 crc kubenswrapper[5136]: E0320 06:53:06.396730 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:06 crc kubenswrapper[5136]: I0320 06:53:06.396095 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:06 crc kubenswrapper[5136]: E0320 06:53:06.396834 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:06 crc kubenswrapper[5136]: E0320 06:53:06.396954 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:07 crc kubenswrapper[5136]: I0320 06:53:07.395905 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:07 crc kubenswrapper[5136]: E0320 06:53:07.396077 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:08 crc kubenswrapper[5136]: I0320 06:53:08.396046 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:08 crc kubenswrapper[5136]: I0320 06:53:08.396155 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:08 crc kubenswrapper[5136]: E0320 06:53:08.396859 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:08 crc kubenswrapper[5136]: I0320 06:53:08.396909 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:08 crc kubenswrapper[5136]: E0320 06:53:08.396919 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:08 crc kubenswrapper[5136]: E0320 06:53:08.397064 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:08 crc kubenswrapper[5136]: E0320 06:53:08.514966 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:53:09 crc kubenswrapper[5136]: I0320 06:53:09.396469 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:09 crc kubenswrapper[5136]: E0320 06:53:09.396663 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:10 crc kubenswrapper[5136]: I0320 06:53:10.395722 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:10 crc kubenswrapper[5136]: I0320 06:53:10.395782 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:10 crc kubenswrapper[5136]: I0320 06:53:10.395722 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:10 crc kubenswrapper[5136]: E0320 06:53:10.396008 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:10 crc kubenswrapper[5136]: E0320 06:53:10.396168 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:10 crc kubenswrapper[5136]: E0320 06:53:10.396692 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:11 crc kubenswrapper[5136]: I0320 06:53:11.395871 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:11 crc kubenswrapper[5136]: E0320 06:53:11.396062 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:12 crc kubenswrapper[5136]: I0320 06:53:12.396077 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:12 crc kubenswrapper[5136]: E0320 06:53:12.396293 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Mar 20 06:53:12 crc kubenswrapper[5136]: I0320 06:53:12.396113 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:12 crc kubenswrapper[5136]: I0320 06:53:12.396109 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:12 crc kubenswrapper[5136]: E0320 06:53:12.396451 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Mar 20 06:53:12 crc kubenswrapper[5136]: E0320 06:53:12.397209 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Mar 20 06:53:12 crc kubenswrapper[5136]: I0320 06:53:12.397517 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.306331 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jz6hg"] Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.306460 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:13 crc kubenswrapper[5136]: E0320 06:53:13.306603 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668" Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.413763 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/3.log" Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.417909 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerStarted","Data":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.418352 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:53:13 crc kubenswrapper[5136]: I0320 06:53:13.447450 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podStartSLOduration=156.447433326 podStartE2EDuration="2m36.447433326s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:13.446265349 +0000 UTC m=+225.705576510" watchObservedRunningTime="2026-03-20 06:53:13.447433326 +0000 UTC m=+225.706744477" Mar 20 06:53:13 crc kubenswrapper[5136]: E0320 06:53:13.516051 5136 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 06:53:14 crc kubenswrapper[5136]: I0320 06:53:14.396342 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:53:14 crc kubenswrapper[5136]: E0320 06:53:14.396474 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 06:53:14 crc kubenswrapper[5136]: I0320 06:53:14.396342 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:53:14 crc kubenswrapper[5136]: I0320 06:53:14.396585 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 06:53:14 crc kubenswrapper[5136]: E0320 06:53:14.396664 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 06:53:14 crc kubenswrapper[5136]: E0320 06:53:14.396773 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 06:53:15 crc kubenswrapper[5136]: I0320 06:53:15.396248 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg"
Mar 20 06:53:15 crc kubenswrapper[5136]: E0320 06:53:15.396437 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668"
Mar 20 06:53:15 crc kubenswrapper[5136]: I0320 06:53:15.397156 5136 scope.go:117] "RemoveContainer" containerID="1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644"
Mar 20 06:53:16 crc kubenswrapper[5136]: I0320 06:53:16.396607 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:53:16 crc kubenswrapper[5136]: I0320 06:53:16.396701 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 06:53:16 crc kubenswrapper[5136]: E0320 06:53:16.396876 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 06:53:16 crc kubenswrapper[5136]: I0320 06:53:16.397037 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:53:16 crc kubenswrapper[5136]: E0320 06:53:16.397179 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 06:53:16 crc kubenswrapper[5136]: E0320 06:53:16.397312 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 06:53:16 crc kubenswrapper[5136]: I0320 06:53:16.431054 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/1.log"
Mar 20 06:53:16 crc kubenswrapper[5136]: I0320 06:53:16.431143 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerStarted","Data":"758a96f3880280b2cc4897f196524e1c9a081a903d0afe658e33991167460924"}
Mar 20 06:53:17 crc kubenswrapper[5136]: I0320 06:53:17.395698 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg"
Mar 20 06:53:17 crc kubenswrapper[5136]: E0320 06:53:17.396003 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-jz6hg" podUID="b5572feb-df7d-4f3a-9b83-3be3de943668"
Mar 20 06:53:18 crc kubenswrapper[5136]: I0320 06:53:18.396415 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:53:18 crc kubenswrapper[5136]: I0320 06:53:18.396523 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:53:18 crc kubenswrapper[5136]: E0320 06:53:18.398532 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 06:53:18 crc kubenswrapper[5136]: I0320 06:53:18.398565 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 06:53:18 crc kubenswrapper[5136]: E0320 06:53:18.398696 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 06:53:18 crc kubenswrapper[5136]: E0320 06:53:18.398805 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.368414 5136 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.416083 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.416720 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-274sn"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.417080 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.417560 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.417908 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.418291 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.419116 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.419396 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rmdpp"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.419967 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.422042 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.424719 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.425043 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.425927 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.426043 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.426197 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gjxt"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.426742 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7gjxt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.428460 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.429104 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.431553 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.431956 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.432278 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.432488 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.432886 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.434895 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87cfr"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.435381 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.435534 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.435721 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.436401 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.436490 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.437023 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.443401 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.443658 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.443975 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.444172 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.444439 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.446637 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-djxmj"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.447184 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.447640 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.448178 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-djxmj"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.448196 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vbv27"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.448769 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.451618 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.452005 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vbjpm"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.452715 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.453512 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.454104 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.463635 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.469561 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.472296 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bjqjp"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.486827 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.487170 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491088 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491220 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491337 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491425 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491703 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491875 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.491935 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.492028 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.492179 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.492254 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.492313 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.492353 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.493311 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-x4wkf"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.493596 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-x4wkf"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.498309 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.498646 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.498859 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.499064 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.499267 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.499589 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.499806 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.500071 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.500278 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.500510 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.500634 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.500863 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.501028 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.501433 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.501644 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.502248 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.502455 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.502769 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503011 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503186 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503296 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503308 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503355 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503431 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503532 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503424 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503634 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503724 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503809 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.503922 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.504009 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.504163 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.504184 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.504190 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.504277 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.505140 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.505343 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.505869 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.506511 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.506840 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.507211 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.507221 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.508093 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.508338 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.508339 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.507258 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.511175 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.511652 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.516419 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.516782 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.516972 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.517000 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.517327 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.517752 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.517935 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.518441 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.518573 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.518603 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519285 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519296 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519384 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519452 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519521 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519778 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519872 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.519948 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.520374 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.520395 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.520501 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.520543 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.520658 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.523206 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.523669 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.535508 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.537180 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.542258 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.542681 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.567153 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.568169 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.569292 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.569756 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.570037 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.570207 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.570365 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.570486 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.572119 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.573415 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.573737 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.575908 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.576184 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.577678 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.577980 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.578011 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.578045 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.582893 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.583541 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-glmlt"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.584403 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jvq8j"]
Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.584599 5136 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.584967 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.585256 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.585491 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.585932 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.585958 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586197 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586295 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586364 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586296 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586741 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.586923 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587270 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwfcd\" (UniqueName: \"kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587301 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587321 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de5bcbec-966a-4934-b21a-a459ab3eb7bc-serving-cert\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587342 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kgmj\" (UniqueName: \"kubernetes.io/projected/5f83cf2a-8b13-4536-bda7-b21bea494966-kube-api-access-7kgmj\") pod 
\"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587362 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587383 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587438 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587463 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.587491 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587859 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-service-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587911 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbj4h\" (UniqueName: \"kubernetes.io/projected/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-kube-api-access-gbj4h\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587928 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-config\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587950 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4vhj\" (UniqueName: 
\"kubernetes.io/projected/de5bcbec-966a-4934-b21a-a459ab3eb7bc-kube-api-access-h4vhj\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587968 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kl98m\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.587986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588000 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588015 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: 
\"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588032 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr7qj\" (UniqueName: \"kubernetes.io/projected/5491b0c6-578a-430a-82db-943e9c7778e5-kube-api-access-dr7qj\") pod \"downloads-7954f5f757-djxmj\" (UID: \"5491b0c6-578a-430a-82db-943e9c7778e5\") " pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588049 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-client\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588065 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-service-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588085 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588104 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwk4x\" (UniqueName: \"kubernetes.io/projected/62c9b093-fe6a-4484-844b-31bbb4f6b21a-kube-api-access-zwk4x\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588137 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588152 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588167 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert\") pod 
\"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588185 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588200 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/de5bcbec-966a-4934-b21a-a459ab3eb7bc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588216 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltwh5\" (UniqueName: \"kubernetes.io/projected/2261aa95-8cc5-4fe7-9515-a065c381aa5b-kube-api-access-ltwh5\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588230 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-images\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 
06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588246 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82z9x\" (UniqueName: \"kubernetes.io/projected/a3ca072d-707e-4c94-9b3a-81eabc72f840-kube-api-access-82z9x\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588260 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588274 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588291 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-config\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588305 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588319 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ab617f-fa16-4ff5-ad90-328e952d31fb-serving-cert\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588332 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-trusted-ca\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588400 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588417 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5rgk\" (UniqueName: \"kubernetes.io/projected/f0ab617f-fa16-4ff5-ad90-328e952d31fb-kube-api-access-k5rgk\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 
06:53:19.588440 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mtj5k"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588503 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588535 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588553 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2261aa95-8cc5-4fe7-9515-a065c381aa5b-serving-cert\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588569 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbz86\" (UniqueName: \"kubernetes.io/projected/e358e5eb-5d33-4510-a9fd-4dff0323f61a-kube-api-access-cbz86\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: E0320 
06:53:19.588595 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.088582879 +0000 UTC m=+232.347894030 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588612 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588630 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3ca072d-707e-4c94-9b3a-81eabc72f840-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588648 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: 
\"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588664 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e358e5eb-5d33-4510-a9fd-4dff0323f61a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588683 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-config\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588696 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62c9b093-fe6a-4484-844b-31bbb4f6b21a-metrics-tls\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588713 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-serving-cert\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588726 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588755 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcvzw\" (UniqueName: \"kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588770 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f83cf2a-8b13-4536-bda7-b21bea494966-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588804 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588857 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: 
\"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588878 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588890 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588895 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-config\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588910 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566492-9gbqz"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589019 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589437 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.588911 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589559 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f83cf2a-8b13-4536-bda7-b21bea494966-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589583 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e49af127-1dfc-4213-b763-a4283104f38f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: \"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589605 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e358e5eb-5d33-4510-a9fd-4dff0323f61a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589624 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589652 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589669 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmrhp\" (UniqueName: \"kubernetes.io/projected/e49af127-1dfc-4213-b763-a4283104f38f-kube-api-access-cmrhp\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: \"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589690 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589703 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589708 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrbcl\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-kube-api-access-jrbcl\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.589972 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mmm42"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.590547 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.594459 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-274sn"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.594501 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rmdpp"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.594587 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gjxt"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.594617 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.594630 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzwlk"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.595419 5136 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.595433 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.595443 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.595515 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.595739 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.598653 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87cfr"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.598996 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.600669 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vbv27"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.602459 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.609790 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.615329 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.618353 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.619181 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.626433 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.626503 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.626733 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.629186 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.630667 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.632529 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.633881 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-djxmj"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.635744 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.637883 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.639408 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.640353 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.643125 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.644848 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mmm42"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.646485 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-wnlnd"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.647321 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.648633 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.650096 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jvq8j"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.651943 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.652881 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vwn87"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.653976 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.654367 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-glmlt"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.655725 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mtj5k"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.657388 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vbjpm"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.658870 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.660011 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.661759 5136 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.662750 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wnlnd"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.664318 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vwn87"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.665403 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566492-9gbqz"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.666629 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzwlk"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.668212 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.669086 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-8mwfm"] Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.669790 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.679032 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690444 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690563 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/de5bcbec-966a-4934-b21a-a459ab3eb7bc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690591 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-images\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690609 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82z9x\" (UniqueName: \"kubernetes.io/projected/a3ca072d-707e-4c94-9b3a-81eabc72f840-kube-api-access-82z9x\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.690628 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690643 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-serving-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690664 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-serving-cert\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690678 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22cf75b6-1525-436a-9999-96f3b2393a03-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690696 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ab617f-fa16-4ff5-ad90-328e952d31fb-serving-cert\") pod 
\"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690712 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-trusted-ca\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690730 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-machine-approver-tls\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690746 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/882e7562-0811-4a27-9e79-cae539acc27d-audit-dir\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690764 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690780 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690796 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af7427ab-0805-477b-b064-f4258cef3ace-config\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690825 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22cf75b6-1525-436a-9999-96f3b2393a03-proxy-tls\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2261aa95-8cc5-4fe7-9515-a065c381aa5b-serving-cert\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690858 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbz86\" (UniqueName: \"kubernetes.io/projected/e358e5eb-5d33-4510-a9fd-4dff0323f61a-kube-api-access-cbz86\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690876 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690901 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-srv-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690917 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mdvw\" (UniqueName: \"kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690932 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3ca072d-707e-4c94-9b3a-81eabc72f840-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690953 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62c9b093-fe6a-4484-844b-31bbb4f6b21a-metrics-tls\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690970 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.690986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6490da1-20d4-4a12-bf24-50e24f3217dc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691003 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcvzw\" (UniqueName: \"kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691019 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-mountpoint-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.691035 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f83cf2a-8b13-4536-bda7-b21bea494966-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691068 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-config\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691085 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e49af127-1dfc-4213-b763-a4283104f38f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: \"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691101 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gflmz\" (UniqueName: \"kubernetes.io/projected/a437188c-af0a-415d-9b0e-9e5b66f41ea3-kube-api-access-gflmz\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691120 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e358e5eb-5d33-4510-a9fd-4dff0323f61a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691134 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691150 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxjlb\" (UniqueName: \"kubernetes.io/projected/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-kube-api-access-xxjlb\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691172 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: 
I0320 06:53:19.691189 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cmrhp\" (UniqueName: \"kubernetes.io/projected/e49af127-1dfc-4213-b763-a4283104f38f-kube-api-access-cmrhp\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: \"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691207 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691592 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4krs5\" (UniqueName: \"kubernetes.io/projected/22cf75b6-1525-436a-9999-96f3b2393a03-kube-api-access-4krs5\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691631 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwfcd\" (UniqueName: 
\"kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691647 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6490da1-20d4-4a12-bf24-50e24f3217dc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691661 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6490da1-20d4-4a12-bf24-50e24f3217dc-config\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691677 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691695 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691710 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691727 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691745 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691764 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-plugins-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691790 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4vhj\" (UniqueName: 
\"kubernetes.io/projected/de5bcbec-966a-4934-b21a-a459ab3eb7bc-kube-api-access-h4vhj\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691806 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kl98m\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691848 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-node-pullsecrets\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691862 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-registration-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691880 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.691897 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr7qj\" (UniqueName: \"kubernetes.io/projected/5491b0c6-578a-430a-82db-943e9c7778e5-kube-api-access-dr7qj\") pod \"downloads-7954f5f757-djxmj\" (UID: \"5491b0c6-578a-430a-82db-943e9c7778e5\") " pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691923 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691938 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a437188c-af0a-415d-9b0e-9e5b66f41ea3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691954 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-csi-data-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692002 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-audit-policies\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692019 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltwh5\" (UniqueName: \"kubernetes.io/projected/2261aa95-8cc5-4fe7-9515-a065c381aa5b-kube-api-access-ltwh5\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692035 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-encryption-config\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692067 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692083 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692099 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692115 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-config\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692130 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config\") pod \"console-f9d7485db-bjqjp\" (UID: 
\"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692147 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692163 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5rgk\" (UniqueName: \"kubernetes.io/projected/f0ab617f-fa16-4ff5-ad90-328e952d31fb-kube-api-access-k5rgk\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692179 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692333 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-config\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692363 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-auth-proxy-config\") pod 
\"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692450 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntblj\" (UniqueName: \"kubernetes.io/projected/c87c53d2-e35b-43e3-910e-852b635c46b8-kube-api-access-ntblj\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692482 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-config\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692492 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692503 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e358e5eb-5d33-4510-a9fd-4dff0323f61a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692553 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-serving-cert\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692599 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692614 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-audit\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692639 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a437188c-af0a-415d-9b0e-9e5b66f41ea3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692668 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f83cf2a-8b13-4536-bda7-b21bea494966-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692693 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692713 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692735 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692753 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert\") pod 
\"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692771 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhbx4\" (UniqueName: \"kubernetes.io/projected/ebaac2a5-0001-4d47-9d55-8ff138364356-kube-api-access-qhbx4\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692827 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-config\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692859 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrbcl\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-kube-api-access-jrbcl\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692876 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4jmx\" (UniqueName: \"kubernetes.io/projected/11250cf1-2849-42f6-8a9c-85d673b4b097-kube-api-access-j4jmx\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692904 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-audit-dir\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692923 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de5bcbec-966a-4934-b21a-a459ab3eb7bc-serving-cert\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692939 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kgmj\" (UniqueName: \"kubernetes.io/projected/5f83cf2a-8b13-4536-bda7-b21bea494966-kube-api-access-7kgmj\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692956 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.692973 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693028 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693027 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693046 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-service-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693063 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693081 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhxhz\" (UniqueName: \"kubernetes.io/projected/882e7562-0811-4a27-9e79-cae539acc27d-kube-api-access-bhxhz\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691363 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/de5bcbec-966a-4934-b21a-a459ab3eb7bc-available-featuregates\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693433 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0ab617f-fa16-4ff5-ad90-328e952d31fb-trusted-ca\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.691403 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-images\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693860 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.693993 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.694606 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.694854 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.695432 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.696504 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbj4h\" (UniqueName: \"kubernetes.io/projected/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-kube-api-access-gbj4h\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.696620 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-config\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.696737 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.696914 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sxkv\" (UniqueName: \"kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.696976 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-config\") pod \"apiserver-76f77b778f-glmlt\" 
(UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697035 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-client\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697058 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-images\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697095 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697122 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af7427ab-0805-477b-b064-f4258cef3ace-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697144 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af7427ab-0805-477b-b064-f4258cef3ace-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697172 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-service-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697196 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-trusted-ca-bundle\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697220 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697278 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-client\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697340 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5f83cf2a-8b13-4536-bda7-b21bea494966-config\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697886 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697970 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-config\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.697992 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.698170 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-config\") pod 
\"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.698273 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.698491 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.698684 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-service-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.698971 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3ca072d-707e-4c94-9b3a-81eabc72f840-config\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699209 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwk4x\" (UniqueName: 
\"kubernetes.io/projected/62c9b093-fe6a-4484-844b-31bbb4f6b21a-kube-api-access-zwk4x\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699250 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d10c92de-8478-436b-bdc0-0fe231faf35c-proxy-tls\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699281 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699307 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699334 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.699363 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699388 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699410 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699435 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-socket-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699463 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699490 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stkb4\" (UniqueName: \"kubernetes.io/projected/d10c92de-8478-436b-bdc0-0fe231faf35c-kube-api-access-stkb4\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699494 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699510 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.699583 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc 
kubenswrapper[5136]: I0320 06:53:19.700367 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.700732 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.700777 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2261aa95-8cc5-4fe7-9515-a065c381aa5b-service-ca-bundle\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.701047 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.701062 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-ca\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: E0320 06:53:19.701164 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.201142135 +0000 UTC m=+232.460453386 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.701555 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e358e5eb-5d33-4510-a9fd-4dff0323f61a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.701691 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e358e5eb-5d33-4510-a9fd-4dff0323f61a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.702530 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: 
\"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.702602 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/62c9b093-fe6a-4484-844b-31bbb4f6b21a-metrics-tls\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.703376 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0ab617f-fa16-4ff5-ad90-328e952d31fb-serving-cert\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.703732 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.703754 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.704447 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f83cf2a-8b13-4536-bda7-b21bea494966-serving-cert\") pod 
\"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.704604 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.704603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.704731 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.704835 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/a3ca072d-707e-4c94-9b3a-81eabc72f840-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.705031 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.705087 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.705289 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2261aa95-8cc5-4fe7-9515-a065c381aa5b-serving-cert\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.705313 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/de5bcbec-966a-4934-b21a-a459ab3eb7bc-serving-cert\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.705771 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " 
pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.706120 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.706793 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.707279 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-etcd-client\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.708609 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-serving-cert\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.708768 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/e49af127-1dfc-4213-b763-a4283104f38f-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: 
\"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.719890 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.739392 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.759025 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.779396 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.799534 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800048 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-node-pullsecrets\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800076 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-registration-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800107 5136 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800125 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800142 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a437188c-af0a-415d-9b0e-9e5b66f41ea3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800158 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-csi-data-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " 
pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800189 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-audit-policies\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800194 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-node-pullsecrets\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-encryption-config\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800228 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800246 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") pod \"collect-profiles-29566485-n6252\" 
(UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800271 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-auth-proxy-config\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800284 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-csi-data-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800285 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntblj\" (UniqueName: \"kubernetes.io/projected/c87c53d2-e35b-43e3-910e-852b635c46b8-kube-api-access-ntblj\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800325 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800352 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-audit\") 
pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a437188c-af0a-415d-9b0e-9e5b66f41ea3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800436 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhbx4\" (UniqueName: \"kubernetes.io/projected/ebaac2a5-0001-4d47-9d55-8ff138364356-kube-api-access-qhbx4\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800460 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-config\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800494 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j4jmx\" (UniqueName: \"kubernetes.io/projected/11250cf1-2849-42f6-8a9c-85d673b4b097-kube-api-access-j4jmx\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800518 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-audit-dir\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800560 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800586 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhxhz\" (UniqueName: \"kubernetes.io/projected/882e7562-0811-4a27-9e79-cae539acc27d-kube-api-access-bhxhz\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800594 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/11250cf1-2849-42f6-8a9c-85d673b4b097-audit-dir\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800608 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800655 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-config\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800655 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-registration-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800677 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-client\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800746 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-images\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800788 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6sxkv\" (UniqueName: \"kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af7427ab-0805-477b-b064-f4258cef3ace-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800869 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af7427ab-0805-477b-b064-f4258cef3ace-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800895 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-trusted-ca-bundle\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800935 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800965 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d10c92de-8478-436b-bdc0-0fe231faf35c-proxy-tls\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.800991 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801015 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-socket-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801069 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801094 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stkb4\" (UniqueName: 
\"kubernetes.io/projected/d10c92de-8478-436b-bdc0-0fe231faf35c-kube-api-access-stkb4\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801117 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-serving-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801137 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-socket-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801138 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-serving-cert\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22cf75b6-1525-436a-9999-96f3b2393a03-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801202 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801221 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-machine-approver-tls\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801237 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/882e7562-0811-4a27-9e79-cae539acc27d-audit-dir\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801253 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af7427ab-0805-477b-b064-f4258cef3ace-config\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801269 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22cf75b6-1525-436a-9999-96f3b2393a03-proxy-tls\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801300 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-srv-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mdvw\" (UniqueName: \"kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801330 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6490da1-20d4-4a12-bf24-50e24f3217dc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801348 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801367 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-mountpoint-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801387 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gflmz\" (UniqueName: \"kubernetes.io/projected/a437188c-af0a-415d-9b0e-9e5b66f41ea3-kube-api-access-gflmz\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801406 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801421 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxjlb\" (UniqueName: \"kubernetes.io/projected/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-kube-api-access-xxjlb\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801454 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 
06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801469 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4krs5\" (UniqueName: \"kubernetes.io/projected/22cf75b6-1525-436a-9999-96f3b2393a03-kube-api-access-4krs5\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801504 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6490da1-20d4-4a12-bf24-50e24f3217dc-config\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801518 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6490da1-20d4-4a12-bf24-50e24f3217dc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801551 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801571 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: 
\"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-plugins-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801640 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-images\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801651 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-plugins-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801658 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d10c92de-8478-436b-bdc0-0fe231faf35c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801674 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/882e7562-0811-4a27-9e79-cae539acc27d-audit-dir\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801687 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.801911 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/c87c53d2-e35b-43e3-910e-852b635c46b8-mountpoint-dir\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.802148 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22cf75b6-1525-436a-9999-96f3b2393a03-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.802402 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a6490da1-20d4-4a12-bf24-50e24f3217dc-config\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.803115 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:19 
crc kubenswrapper[5136]: E0320 06:53:19.803303 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.303288635 +0000 UTC m=+232.562599856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.806752 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a6490da1-20d4-4a12-bf24-50e24f3217dc-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.806799 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d10c92de-8478-436b-bdc0-0fe231faf35c-proxy-tls\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.806999 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/22cf75b6-1525-436a-9999-96f3b2393a03-proxy-tls\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") 
" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.819496 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.839466 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.860236 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.879577 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.899316 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.902176 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:19 crc kubenswrapper[5136]: E0320 06:53:19.902348 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.402328537 +0000 UTC m=+232.661639688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.902427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:19 crc kubenswrapper[5136]: E0320 06:53:19.903034 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.403026528 +0000 UTC m=+232.662337679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.906041 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/af7427ab-0805-477b-b064-f4258cef3ace-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.918946 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.925083 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-srv-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.939724 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.944458 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.946698 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ebaac2a5-0001-4d47-9d55-8ff138364356-profile-collector-cert\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.958797 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.963839 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/af7427ab-0805-477b-b064-f4258cef3ace-config\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:19 crc kubenswrapper[5136]: I0320 06:53:19.979251 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.000452 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.004633 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.004750 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.504732024 +0000 UTC m=+232.764043185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.005554 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.005985 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.505967093 +0000 UTC m=+232.765278244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.019790 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.039981 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.064618 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.079796 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.099410 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.106297 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.106521 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.606494102 +0000 UTC m=+232.865805293 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.106660 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.107170 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.607125371 +0000 UTC m=+232.866436522 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.119750 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.126124 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.139233 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.144071 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.160033 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.162193 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.179657 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.199537 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.208184 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.209067 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.709038283 +0000 UTC m=+232.968349474 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.219850 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.239612 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.260468 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.279945 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.285519 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a437188c-af0a-415d-9b0e-9e5b66f41ea3-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.299649 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 20 06:53:20 crc kubenswrapper[5136]: 
I0320 06:53:20.301954 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a437188c-af0a-415d-9b0e-9e5b66f41ea3-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.311339 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.311931 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.811906675 +0000 UTC m=+233.071217866 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.320314 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.339574 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.359243 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.380359 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.385810 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-machine-approver-tls\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.396654 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.396663 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.396961 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.399517 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.401895 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-auth-proxy-config\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.412567 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.412781 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.912748823 +0000 UTC m=+233.172060014 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.413905 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.414589 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:20.914562911 +0000 UTC m=+233.173874102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.420399 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.421353 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-config\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.440146 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.460867 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.480282 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.499780 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.516180 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.516530 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.016491063 +0000 UTC m=+233.275802214 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.516958 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.517653 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.017619309 +0000 UTC m=+233.276930650 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.533236 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.540477 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.560194 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.580710 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.597662 5136 request.go:700] Waited for 1.01183552s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-apiserver/configmaps?fieldSelector=metadata.name%3Daudit-1&limit=500&resourceVersion=0 Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.599706 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.602335 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-audit\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " 
pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.618921 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.619170 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.119138669 +0000 UTC m=+233.378449850 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.620018 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.620588 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 06:53:21.120565244 +0000 UTC m=+233.379876425 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.621451 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.637107 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-client\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.640466 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.650797 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-etcd-serving-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.659615 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.665605 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-serving-cert\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.680425 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.685385 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/11250cf1-2849-42f6-8a9c-85d673b4b097-encryption-config\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.700752 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.701643 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-config\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.723600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.723903 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-03-20 06:53:21.223868819 +0000 UTC m=+233.483180020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.724804 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.725330 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.225306405 +0000 UTC m=+233.484617586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.730988 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.733515 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-trusted-ca-bundle\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.740404 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.759367 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.779713 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.782098 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-audit-policies\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 
06:53:20.800538 5136 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.800736 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume podName:d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.300700161 +0000 UTC m=+233.560011352 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume") pod "collect-profiles-29566485-n6252" (UID: "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8") : failed to sync configmap cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.800803 5136 secret.go:188] Couldn't get secret openshift-oauth-apiserver/etcd-client: failed to sync secret cache: timed out waiting for the condition Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.800918 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client podName:882e7562-0811-4a27-9e79-cae539acc27d nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.300884857 +0000 UTC m=+233.560196038 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etcd-client" (UniqueName: "kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client") pod "apiserver-7bbb656c7d-mq9hd" (UID: "882e7562-0811-4a27-9e79-cae539acc27d") : failed to sync secret cache: timed out waiting for the condition
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.800983 5136 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/etcd-serving-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.801053 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca podName:882e7562-0811-4a27-9e79-cae539acc27d nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.301022222 +0000 UTC m=+233.560333413 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etcd-serving-ca" (UniqueName: "kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca") pod "apiserver-7bbb656c7d-mq9hd" (UID: "882e7562-0811-4a27-9e79-cae539acc27d") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802076 5136 secret.go:188] Couldn't get secret openshift-oauth-apiserver/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802183 5136 secret.go:188] Couldn't get secret openshift-oauth-apiserver/encryption-config-1: failed to sync secret cache: timed out waiting for the condition
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802227 5136 configmap.go:193] Couldn't get configMap openshift-apiserver/image-import-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802331 5136 configmap.go:193] Couldn't get configMap openshift-oauth-apiserver/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802258 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert podName:882e7562-0811-4a27-9e79-cae539acc27d nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.302224599 +0000 UTC m=+233.561535780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert") pod "apiserver-7bbb656c7d-mq9hd" (UID: "882e7562-0811-4a27-9e79-cae539acc27d") : failed to sync secret cache: timed out waiting for the condition
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802436 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config podName:882e7562-0811-4a27-9e79-cae539acc27d nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.302391574 +0000 UTC m=+233.561702765 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "encryption-config" (UniqueName: "kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config") pod "apiserver-7bbb656c7d-mq9hd" (UID: "882e7562-0811-4a27-9e79-cae539acc27d") : failed to sync secret cache: timed out waiting for the condition
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802514 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca podName:11250cf1-2849-42f6-8a9c-85d673b4b097 nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.302458566 +0000 UTC m=+233.561769757 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-import-ca" (UniqueName: "kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca") pod "apiserver-76f77b778f-glmlt" (UID: "11250cf1-2849-42f6-8a9c-85d673b4b097") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.802554 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle podName:882e7562-0811-4a27-9e79-cae539acc27d nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.302534789 +0000 UTC m=+233.561845970 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle") pod "apiserver-7bbb656c7d-mq9hd" (UID: "882e7562-0811-4a27-9e79-cae539acc27d") : failed to sync configmap cache: timed out waiting for the condition
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.803360 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.821496 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.826644 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.826970 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.326939668 +0000 UTC m=+233.586250859 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.827148 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.827699 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.327678731 +0000 UTC m=+233.586989912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.839469 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.859803 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.880312 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.899749 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.919988 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.928960 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.929140 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.429109648 +0000 UTC m=+233.688420829 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.929468 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:20 crc kubenswrapper[5136]: E0320 06:53:20.930052 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.430035818 +0000 UTC m=+233.689346979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.940513 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.960327 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Mar 20 06:53:20 crc kubenswrapper[5136]: I0320 06:53:20.979568 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.000169 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.020182 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.030609 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.030890 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.530857806 +0000 UTC m=+233.790168997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.031593 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.032081 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.532064763 +0000 UTC m=+233.791375954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.039458 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.059433 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.079888 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.100299 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.119614 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.132522 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.132756 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.632715895 +0000 UTC m=+233.892027086 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.133227 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.133585 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.633571192 +0000 UTC m=+233.892882343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.140339 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.160943 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.179958 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.200802 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.220037 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.234423 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.234649 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.734618137 +0000 UTC m=+233.993929328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.235369 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.235868 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.735842115 +0000 UTC m=+233.995153306 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.240438 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.259995 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.280094 5136 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.301013 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.319803 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.336447 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.336601 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.836562839 +0000 UTC m=+234.095874030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.336669 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.336787 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.336899 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.337090 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.337150 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.337237 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.337335 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.337489 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.337620 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.837596312 +0000 UTC m=+234.096907503 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.337785 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.338252 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/11250cf1-2849-42f6-8a9c-85d673b4b097-image-import-ca\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.338486 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/882e7562-0811-4a27-9e79-cae539acc27d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.339060 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.339661 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.343426 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-encryption-config\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.346488 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-serving-cert\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.348571 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/882e7562-0811-4a27-9e79-cae539acc27d-etcd-client\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.361411 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.380616 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.420243 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.438768 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.438967 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.938940907 +0000 UTC m=+234.198252088 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.439427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.439729 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:21.939719341 +0000 UTC m=+234.199030492 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.440025 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.460189 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.480443 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.499498 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.521139 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.539566 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.539934 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.540182 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.040147127 +0000 UTC m=+234.299458328 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.540572 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.540965 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.040951661 +0000 UTC m=+234.300262872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.579784 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.598149 5136 request.go:700] Waited for 1.927995265s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dnode-bootstrapper-token&limit=500&resourceVersion=0
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.600739 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.620377 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.642221 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.642621 5136 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.142531813 +0000 UTC m=+234.401843014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.643480 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.644155 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.144098962 +0000 UTC m=+234.403410143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.669185 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82z9x\" (UniqueName: \"kubernetes.io/projected/a3ca072d-707e-4c94-9b3a-81eabc72f840-kube-api-access-82z9x\") pod \"machine-api-operator-5694c8668f-vbjpm\" (UID: \"a3ca072d-707e-4c94-9b3a-81eabc72f840\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.687628 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cmrhp\" (UniqueName: \"kubernetes.io/projected/e49af127-1dfc-4213-b763-a4283104f38f-kube-api-access-cmrhp\") pod \"cluster-samples-operator-665b6dd947-gfft2\" (UID: \"e49af127-1dfc-4213-b763-a4283104f38f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.708117 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcvzw\" (UniqueName: \"kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw\") pod \"console-f9d7485db-bjqjp\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") " pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.729972 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr7qj\" (UniqueName: \"kubernetes.io/projected/5491b0c6-578a-430a-82db-943e9c7778e5-kube-api-access-dr7qj\") pod 
\"downloads-7954f5f757-djxmj\" (UID: \"5491b0c6-578a-430a-82db-943e9c7778e5\") " pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.741059 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwfcd\" (UniqueName: \"kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd\") pod \"oauth-openshift-558db77b4-6s42p\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") " pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.744321 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.745453 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.746592 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.246561992 +0000 UTC m=+234.505873193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.759082 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.766437 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.766634 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltwh5\" (UniqueName: \"kubernetes.io/projected/2261aa95-8cc5-4fe7-9515-a065c381aa5b-kube-api-access-ltwh5\") pod \"authentication-operator-69f744f599-rmdpp\" (UID: \"2261aa95-8cc5-4fe7-9515-a065c381aa5b\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.775353 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.784980 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kgmj\" (UniqueName: \"kubernetes.io/projected/5f83cf2a-8b13-4536-bda7-b21bea494966-kube-api-access-7kgmj\") pod \"openshift-apiserver-operator-796bbdcf4f-2mt79\" (UID: \"5f83cf2a-8b13-4536-bda7-b21bea494966\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.804390 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4vhj\" (UniqueName: \"kubernetes.io/projected/de5bcbec-966a-4934-b21a-a459ab3eb7bc-kube-api-access-h4vhj\") pod \"openshift-config-operator-7777fb866f-274sn\" (UID: \"de5bcbec-966a-4934-b21a-a459ab3eb7bc\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.823981 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kl98m\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.848301 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.848940 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.348907327 +0000 UTC m=+234.608218558 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.850649 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.875941 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrbcl\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-kube-api-access-jrbcl\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.916041 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.916260 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.935429 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbj4h\" (UniqueName: \"kubernetes.io/projected/a1dff0e1-4e1b-49cc-bc54-d157138a2d20-kube-api-access-gbj4h\") pod \"etcd-operator-b45778765-vbv27\" (UID: \"a1dff0e1-4e1b-49cc-bc54-d157138a2d20\") " pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.938836 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.941638 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/246c7ce4-1953-4a0c-9fed-cabc26f79f3f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-ckk2q\" (UID: \"246c7ce4-1953-4a0c-9fed-cabc26f79f3f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.944490 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbz86\" (UniqueName: \"kubernetes.io/projected/e358e5eb-5d33-4510-a9fd-4dff0323f61a-kube-api-access-cbz86\") pod \"openshift-controller-manager-operator-756b6f6bc6-bt884\" (UID: \"e358e5eb-5d33-4510-a9fd-4dff0323f61a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.944583 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5rgk\" (UniqueName: \"kubernetes.io/projected/f0ab617f-fa16-4ff5-ad90-328e952d31fb-kube-api-access-k5rgk\") pod \"console-operator-58897d9998-7gjxt\" (UID: \"f0ab617f-fa16-4ff5-ad90-328e952d31fb\") " 
pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.949700 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:21 crc kubenswrapper[5136]: E0320 06:53:21.950720 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.450701556 +0000 UTC m=+234.710012717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.959775 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwk4x\" (UniqueName: \"kubernetes.io/projected/62c9b093-fe6a-4484-844b-31bbb4f6b21a-kube-api-access-zwk4x\") pod \"dns-operator-744455d44c-87cfr\" (UID: \"62c9b093-fe6a-4484-844b-31bbb4f6b21a\") " pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.974043 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.977049 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntblj\" (UniqueName: \"kubernetes.io/projected/c87c53d2-e35b-43e3-910e-852b635c46b8-kube-api-access-ntblj\") pod \"csi-hostpathplugin-pzwlk\" (UID: \"c87c53d2-e35b-43e3-910e-852b635c46b8\") " pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:21 crc kubenswrapper[5136]: I0320 06:53:21.997106 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.001225 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3289a6fb-5129-4ab6-b4b1-9d0b2d8af713-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-z7wgb\" (UID: \"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.011454 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.018608 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhbx4\" (UniqueName: \"kubernetes.io/projected/ebaac2a5-0001-4d47-9d55-8ff138364356-kube-api-access-qhbx4\") pod \"olm-operator-6b444d44fb-2h2rd\" (UID: \"ebaac2a5-0001-4d47-9d55-8ff138364356\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.037039 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.042525 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4jmx\" (UniqueName: \"kubernetes.io/projected/11250cf1-2849-42f6-8a9c-85d673b4b097-kube-api-access-j4jmx\") pod \"apiserver-76f77b778f-glmlt\" (UID: \"11250cf1-2849-42f6-8a9c-85d673b4b097\") " pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.051696 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.053432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.053880 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.553867947 +0000 UTC m=+234.813179098 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.060575 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhxhz\" (UniqueName: \"kubernetes.io/projected/882e7562-0811-4a27-9e79-cae539acc27d-kube-api-access-bhxhz\") pod \"apiserver-7bbb656c7d-mq9hd\" (UID: \"882e7562-0811-4a27-9e79-cae539acc27d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.075237 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.075442 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sxkv\" (UniqueName: \"kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv\") pod \"route-controller-manager-6576b87f9c-m4btr\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.115017 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/af7427ab-0805-477b-b064-f4258cef3ace-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-k5mxn\" (UID: \"af7427ab-0805-477b-b064-f4258cef3ace\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.132393 5136 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.147008 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stkb4\" (UniqueName: \"kubernetes.io/projected/d10c92de-8478-436b-bdc0-0fe231faf35c-kube-api-access-stkb4\") pod \"machine-config-operator-74547568cd-zmz56\" (UID: \"d10c92de-8478-436b-bdc0-0fe231faf35c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.148624 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.148798 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.150225 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.155262 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.155729 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-20 06:53:22.655708647 +0000 UTC m=+234.915019798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.167928 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.173835 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mdvw\" (UniqueName: \"kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw\") pod \"collect-profiles-29566485-n6252\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.174249 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.176327 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gflmz\" (UniqueName: \"kubernetes.io/projected/a437188c-af0a-415d-9b0e-9e5b66f41ea3-kube-api-access-gflmz\") pod \"kube-storage-version-migrator-operator-b67b599dd-6jcpg\" (UID: \"a437188c-af0a-415d-9b0e-9e5b66f41ea3\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.181562 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.187381 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4krs5\" (UniqueName: \"kubernetes.io/projected/22cf75b6-1525-436a-9999-96f3b2393a03-kube-api-access-4krs5\") pod \"machine-config-controller-84d6567774-x6mlj\" (UID: \"22cf75b6-1525-436a-9999-96f3b2393a03\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.191249 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6490da1-20d4-4a12-bf24-50e24f3217dc-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-tzhmj\" (UID: \"a6490da1-20d4-4a12-bf24-50e24f3217dc\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.194148 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.220365 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxjlb\" (UniqueName: \"kubernetes.io/projected/ef0aa20b-fefd-4024-82fd-3b5e014ce1d7-kube-api-access-xxjlb\") pod \"machine-approver-56656f9798-4hr5m\" (UID: \"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.222540 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.224411 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.236899 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.242136 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.254150 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.257411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.257740 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.757727102 +0000 UTC m=+235.017038253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.261104 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-vbjpm"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.261276 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.263744 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-djxmj"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.264873 5136 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.279587 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.297223 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:22 crc kubenswrapper[5136]: W0320 06:53:22.305288 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5491b0c6_578a_430a_82db_943e9c7778e5.slice/crio-3b5682a24840224c0c9b10b308c40afb67c185614903a5bc1d87967d741e3811 WatchSource:0}: Error finding container 3b5682a24840224c0c9b10b308c40afb67c185614903a5bc1d87967d741e3811: Status 404 returned error can't find the container with id 3b5682a24840224c0c9b10b308c40afb67c185614903a5bc1d87967d741e3811 Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359717 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359871 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9598b\" (UniqueName: \"kubernetes.io/projected/8b148c18-da73-4c17-85f7-454eebfe96f8-kube-api-access-9598b\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359896 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-stats-auth\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359915 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/06133c52-727b-4ded-b835-f0f71093b193-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359944 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-apiservice-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359960 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.359997 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0541594-5780-4b00-a3c7-3b132a0cde9b-trusted-ca\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360031 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cc8288-8479-40e0-bb0b-4aad0244d57d-config\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360080 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.360391 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.860363718 +0000 UTC m=+235.119674959 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360743 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpfx7\" (UniqueName: \"kubernetes.io/projected/f3343084-9f31-46fb-8514-b5391882700a-kube-api-access-tpfx7\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360886 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2p4z\" (UniqueName: \"kubernetes.io/projected/dd410106-c7b7-4706-9b99-38e3597ee713-kube-api-access-z2p4z\") pod \"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360911 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360933 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkpsr\" (UniqueName: \"kubernetes.io/projected/06133c52-727b-4ded-b835-f0f71093b193-kube-api-access-lkpsr\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360949 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4583d32-b996-4de0-a7a9-3f13086640a2-service-ca-bundle\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360969 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-webhook-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.360986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gglvk\" (UniqueName: \"kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: 
\"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361025 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361066 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv4wr\" (UniqueName: \"kubernetes.io/projected/42cc8288-8479-40e0-bb0b-4aad0244d57d-kube-api-access-pv4wr\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4p96\" (UniqueName: \"kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361097 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2djxf\" (UniqueName: \"kubernetes.io/projected/b4583d32-b996-4de0-a7a9-3f13086640a2-kube-api-access-2djxf\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361137 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.361161 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.861141421 +0000 UTC m=+235.120452672 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361205 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-profile-collector-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361240 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldwsl\" (UniqueName: \"kubernetes.io/projected/ffd3e201-0817-43ed-b8db-d7b526017b69-kube-api-access-ldwsl\") pod \"migrator-59844c95c7-6ckd4\" (UID: \"ffd3e201-0817-43ed-b8db-d7b526017b69\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361272 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3343084-9f31-46fb-8514-b5391882700a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361304 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd410106-c7b7-4706-9b99-38e3597ee713-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361328 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0541594-5780-4b00-a3c7-3b132a0cde9b-metrics-tls\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361390 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-srv-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361602 
5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjzr9\" (UniqueName: \"kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9\") pod \"auto-csr-approver-29566492-9gbqz\" (UID: \"760c854a-7b9d-4582-9bcc-faf077008e0f\") " pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361701 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-key\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.361862 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-default-certificate\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.362008 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7hr7\" (UniqueName: \"kubernetes.io/projected/1939ab6e-c688-43a4-bca6-7cc00e950962-kube-api-access-x7hr7\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.362052 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvbht\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-kube-api-access-mvbht\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: 
\"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.362079 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.362105 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-cabundle\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.364595 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.364658 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42cc8288-8479-40e0-bb0b-4aad0244d57d-serving-cert\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.364697 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8b148c18-da73-4c17-85f7-454eebfe96f8-tmpfs\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.364748 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzn76\" (UniqueName: \"kubernetes.io/projected/8eccc00e-2821-4e84-9040-6aa1e58daf78-kube-api-access-bzn76\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.364769 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-metrics-certs\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.437943 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.451571 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-rmdpp"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.451713 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465331 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465482 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/06133c52-727b-4ded-b835-f0f71093b193-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465505 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.465560 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:22.965539552 +0000 UTC m=+235.224850703 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465606 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-apiservice-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465694 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0541594-5780-4b00-a3c7-3b132a0cde9b-trusted-ca\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465714 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdmn8\" (UniqueName: \"kubernetes.io/projected/f179a691-95b5-4d8a-9f4f-48267b8587a7-kube-api-access-zdmn8\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465807 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdkkm\" (UniqueName: 
\"kubernetes.io/projected/e31fd981-67e5-461a-b43c-89a38265e7ed-kube-api-access-jdkkm\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465872 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cc8288-8479-40e0-bb0b-4aad0244d57d-config\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465890 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e31fd981-67e5-461a-b43c-89a38265e7ed-config-volume\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465952 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.465975 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpfx7\" (UniqueName: \"kubernetes.io/projected/f3343084-9f31-46fb-8514-b5391882700a-kube-api-access-tpfx7\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466009 5136 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466084 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2p4z\" (UniqueName: \"kubernetes.io/projected/dd410106-c7b7-4706-9b99-38e3597ee713-kube-api-access-z2p4z\") pod \"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466116 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466137 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkpsr\" (UniqueName: \"kubernetes.io/projected/06133c52-727b-4ded-b835-f0f71093b193-kube-api-access-lkpsr\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466153 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4583d32-b996-4de0-a7a9-3f13086640a2-service-ca-bundle\") pod \"router-default-5444994796-x4wkf\" (UID: 
\"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466169 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-webhook-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466199 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gglvk\" (UniqueName: \"kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466226 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-node-bootstrap-token\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466247 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e31fd981-67e5-461a-b43c-89a38265e7ed-metrics-tls\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466279 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466318 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv4wr\" (UniqueName: \"kubernetes.io/projected/42cc8288-8479-40e0-bb0b-4aad0244d57d-kube-api-access-pv4wr\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466334 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4p96\" (UniqueName: \"kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466349 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2djxf\" (UniqueName: \"kubernetes.io/projected/b4583d32-b996-4de0-a7a9-3f13086640a2-kube-api-access-2djxf\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466375 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-certs\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466420 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466437 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-profile-collector-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466474 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldwsl\" (UniqueName: \"kubernetes.io/projected/ffd3e201-0817-43ed-b8db-d7b526017b69-kube-api-access-ldwsl\") pod \"migrator-59844c95c7-6ckd4\" (UID: \"ffd3e201-0817-43ed-b8db-d7b526017b69\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466522 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/f3343084-9f31-46fb-8514-b5391882700a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466559 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd410106-c7b7-4706-9b99-38e3597ee713-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466575 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0541594-5780-4b00-a3c7-3b132a0cde9b-metrics-tls\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466606 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-srv-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466651 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjzr9\" (UniqueName: \"kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9\") pod \"auto-csr-approver-29566492-9gbqz\" (UID: \"760c854a-7b9d-4582-9bcc-faf077008e0f\") " pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466689 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-key\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466705 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/22df33b0-12a4-40ed-b739-85240eb615e7-cert\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466735 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-default-certificate\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466779 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsfpw\" (UniqueName: \"kubernetes.io/projected/22df33b0-12a4-40ed-b739-85240eb615e7-kube-api-access-nsfpw\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466829 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7hr7\" (UniqueName: \"kubernetes.io/projected/1939ab6e-c688-43a4-bca6-7cc00e950962-kube-api-access-x7hr7\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466846 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvbht\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-kube-api-access-mvbht\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466864 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466918 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-cabundle\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466952 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.466987 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42cc8288-8479-40e0-bb0b-4aad0244d57d-serving-cert\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.467044 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8b148c18-da73-4c17-85f7-454eebfe96f8-tmpfs\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 
06:53:22.467070 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzn76\" (UniqueName: \"kubernetes.io/projected/8eccc00e-2821-4e84-9040-6aa1e58daf78-kube-api-access-bzn76\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.467085 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-metrics-certs\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.467170 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9598b\" (UniqueName: \"kubernetes.io/projected/8b148c18-da73-4c17-85f7-454eebfe96f8-kube-api-access-9598b\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.467186 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-stats-auth\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.468315 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.469889 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4583d32-b996-4de0-a7a9-3f13086640a2-service-ca-bundle\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.470188 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/8b148c18-da73-4c17-85f7-454eebfe96f8-tmpfs\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.470791 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0541594-5780-4b00-a3c7-3b132a0cde9b-trusted-ca\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.470861 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.471421 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 06:53:22.971406507 +0000 UTC m=+235.230717658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.472393 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/42cc8288-8479-40e0-bb0b-4aad0244d57d-config\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.472472 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.473297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.473420 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-stats-auth\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.475876 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-cabundle\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.475919 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1939ab6e-c688-43a4-bca6-7cc00e950962-signing-key\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.476360 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/06133c52-727b-4ded-b835-f0f71093b193-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.476900 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.477767 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/f3343084-9f31-46fb-8514-b5391882700a-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.478515 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-apiservice-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.486219 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.486303 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-metrics-certs\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.488838 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8b148c18-da73-4c17-85f7-454eebfe96f8-webhook-cert\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: \"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.489345 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/42cc8288-8479-40e0-bb0b-4aad0244d57d-serving-cert\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.490801 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d0541594-5780-4b00-a3c7-3b132a0cde9b-metrics-tls\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.491118 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/dd410106-c7b7-4706-9b99-38e3597ee713-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.494673 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-srv-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.494751 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/b4583d32-b996-4de0-a7a9-3f13086640a2-default-certificate\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " 
pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.496367 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8eccc00e-2821-4e84-9040-6aa1e58daf78-profile-collector-cert\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.497403 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" event={"ID":"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7","Type":"ContainerStarted","Data":"6f972abf3e722d5aaab333938b3513d43785c466ec4ea4bb2bc0de16b861c464"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.505936 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-274sn"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.506039 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" event={"ID":"e49af127-1dfc-4213-b763-a4283104f38f","Type":"ContainerStarted","Data":"24d7d61298f331ce5005b0f443fb1e3b95ee217bcec80d7c1f66854a007d49f9"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.506874 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.508451 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-djxmj" event={"ID":"5491b0c6-578a-430a-82db-943e9c7778e5","Type":"ContainerStarted","Data":"a976903997ec15076b987fc47679fc1f389241cb6da3d09436a4536cc6ee6b64"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.508485 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-djxmj" event={"ID":"5491b0c6-578a-430a-82db-943e9c7778e5","Type":"ContainerStarted","Data":"3b5682a24840224c0c9b10b308c40afb67c185614903a5bc1d87967d741e3811"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.508797 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.510902 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bjqjp" event={"ID":"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448","Type":"ContainerStarted","Data":"1705fa19bd8f5aa96bc704e7afa6e708e4641f98ce0af56ebad4536addf3960e"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.511955 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" event={"ID":"a3ca072d-707e-4c94-9b3a-81eabc72f840","Type":"ContainerStarted","Data":"6bde56da44b70546553466d69cdfb895655dbb9c29ecdea02349c9e84d6cfc3f"} Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.514008 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-7gjxt"] Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.515373 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjzr9\" (UniqueName: \"kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9\") pod \"auto-csr-approver-29566492-9gbqz\" (UID: \"760c854a-7b9d-4582-9bcc-faf077008e0f\") " pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.531959 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9598b\" (UniqueName: \"kubernetes.io/projected/8b148c18-da73-4c17-85f7-454eebfe96f8-kube-api-access-9598b\") pod \"packageserver-d55dfcdfc-2q7k6\" (UID: 
\"8b148c18-da73-4c17-85f7-454eebfe96f8\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.552558 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv4wr\" (UniqueName: \"kubernetes.io/projected/42cc8288-8479-40e0-bb0b-4aad0244d57d-kube-api-access-pv4wr\") pod \"service-ca-operator-777779d784-mmm42\" (UID: \"42cc8288-8479-40e0-bb0b-4aad0244d57d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.555115 5136 patch_prober.go:28] interesting pod/downloads-7954f5f757-djxmj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.555159 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-djxmj" podUID="5491b0c6-578a-430a-82db-943e9c7778e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 20 06:53:22 crc kubenswrapper[5136]: W0320 06:53:22.563218 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2261aa95_8cc5_4fe7_9515_a065c381aa5b.slice/crio-654898de6202000e3542c5ff7a29d4dc1b9ece9e166152c2aa19f5d10b9b55eb WatchSource:0}: Error finding container 654898de6202000e3542c5ff7a29d4dc1b9ece9e166152c2aa19f5d10b9b55eb: Status 404 returned error can't find the container with id 654898de6202000e3542c5ff7a29d4dc1b9ece9e166152c2aa19f5d10b9b55eb Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567504 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567679 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsfpw\" (UniqueName: \"kubernetes.io/projected/22df33b0-12a4-40ed-b739-85240eb615e7-kube-api-access-nsfpw\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567744 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdmn8\" (UniqueName: \"kubernetes.io/projected/f179a691-95b5-4d8a-9f4f-48267b8587a7-kube-api-access-zdmn8\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567771 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdkkm\" (UniqueName: \"kubernetes.io/projected/e31fd981-67e5-461a-b43c-89a38265e7ed-kube-api-access-jdkkm\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e31fd981-67e5-461a-b43c-89a38265e7ed-config-volume\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567882 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: 
\"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-node-bootstrap-token\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.567897 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e31fd981-67e5-461a-b43c-89a38265e7ed-metrics-tls\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.568191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-certs\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.568238 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22df33b0-12a4-40ed-b739-85240eb615e7-cert\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.568585 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e31fd981-67e5-461a-b43c-89a38265e7ed-config-volume\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.568760 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-03-20 06:53:23.068732124 +0000 UTC m=+235.328043295 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.572676 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-certs\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.575185 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e31fd981-67e5-461a-b43c-89a38265e7ed-metrics-tls\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.577239 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22df33b0-12a4-40ed-b739-85240eb615e7-cert\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87" Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.577763 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f179a691-95b5-4d8a-9f4f-48267b8587a7-node-bootstrap-token\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " 
pod="openshift-machine-config-operator/machine-config-server-8mwfm"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.599339 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2p4z\" (UniqueName: \"kubernetes.io/projected/dd410106-c7b7-4706-9b99-38e3597ee713-kube-api-access-z2p4z\") pod \"control-plane-machine-set-operator-78cbb6b69f-j6ffq\" (UID: \"dd410106-c7b7-4706-9b99-38e3597ee713\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.612342 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.617760 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4p96\" (UniqueName: \"kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96\") pod \"marketplace-operator-79b997595-mbfm4\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.646385 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566492-9gbqz"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.647264 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkpsr\" (UniqueName: \"kubernetes.io/projected/06133c52-727b-4ded-b835-f0f71093b193-kube-api-access-lkpsr\") pod \"multus-admission-controller-857f4d67dd-jvq8j\" (UID: \"06133c52-727b-4ded-b835-f0f71093b193\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.651565 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.666212 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpfx7\" (UniqueName: \"kubernetes.io/projected/f3343084-9f31-46fb-8514-b5391882700a-kube-api-access-tpfx7\") pod \"package-server-manager-789f6589d5-rqppz\" (UID: \"f3343084-9f31-46fb-8514-b5391882700a\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.670757 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.671249 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.171234015 +0000 UTC m=+235.430545166 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.673439 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzn76\" (UniqueName: \"kubernetes.io/projected/8eccc00e-2821-4e84-9040-6aa1e58daf78-kube-api-access-bzn76\") pod \"catalog-operator-68c6474976-xssjv\" (UID: \"8eccc00e-2821-4e84-9040-6aa1e58daf78\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.684870 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.699661 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2djxf\" (UniqueName: \"kubernetes.io/projected/b4583d32-b996-4de0-a7a9-3f13086640a2-kube-api-access-2djxf\") pod \"router-default-5444994796-x4wkf\" (UID: \"b4583d32-b996-4de0-a7a9-3f13086640a2\") " pod="openshift-ingress/router-default-5444994796-x4wkf"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.715079 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldwsl\" (UniqueName: \"kubernetes.io/projected/ffd3e201-0817-43ed-b8db-d7b526017b69-kube-api-access-ldwsl\") pod \"migrator-59844c95c7-6ckd4\" (UID: \"ffd3e201-0817-43ed-b8db-d7b526017b69\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.749468 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gglvk\" (UniqueName: \"kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk\") pod \"controller-manager-879f6c89f-jvzhk\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.766999 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.767621 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvbht\" (UniqueName: \"kubernetes.io/projected/d0541594-5780-4b00-a3c7-3b132a0cde9b-kube-api-access-mvbht\") pod \"ingress-operator-5b745b69d9-g7v2v\" (UID: \"d0541594-5780-4b00-a3c7-3b132a0cde9b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.772359 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.772765 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.272750584 +0000 UTC m=+235.532061735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.779831 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7hr7\" (UniqueName: \"kubernetes.io/projected/1939ab6e-c688-43a4-bca6-7cc00e950962-kube-api-access-x7hr7\") pod \"service-ca-9c57cc56f-mtj5k\" (UID: \"1939ab6e-c688-43a4-bca6-7cc00e950962\") " pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.788301 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.805572 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.825258 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"]
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.827438 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-87cfr"]
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.829748 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsfpw\" (UniqueName: \"kubernetes.io/projected/22df33b0-12a4-40ed-b739-85240eb615e7-kube-api-access-nsfpw\") pod \"ingress-canary-vwn87\" (UID: \"22df33b0-12a4-40ed-b739-85240eb615e7\") " pod="openshift-ingress-canary/ingress-canary-vwn87"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.843666 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdkkm\" (UniqueName: \"kubernetes.io/projected/e31fd981-67e5-461a-b43c-89a38265e7ed-kube-api-access-jdkkm\") pod \"dns-default-wnlnd\" (UID: \"e31fd981-67e5-461a-b43c-89a38265e7ed\") " pod="openshift-dns/dns-default-wnlnd"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.846312 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.864779 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn"]
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.888135 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdmn8\" (UniqueName: \"kubernetes.io/projected/f179a691-95b5-4d8a-9f4f-48267b8587a7-kube-api-access-zdmn8\") pod \"machine-config-server-8mwfm\" (UID: \"f179a691-95b5-4d8a-9f4f-48267b8587a7\") " pod="openshift-machine-config-operator/machine-config-server-8mwfm"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.890553 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.891587 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.892026 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.392013164 +0000 UTC m=+235.651324315 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.892865 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.899465 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q"]
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.905223 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.927665 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.954157 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd"]
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.959036 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884"]
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.962047 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-vbv27"]
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.968714 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-pzwlk"]
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.971164 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56"]
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.971233 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb"]
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.976890 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-glmlt"]
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.984463 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.990526 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-x4wkf"
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.992293 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.992442 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-wnlnd"
Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.992628 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.492610284 +0000 UTC m=+235.751921435 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:22 crc kubenswrapper[5136]: I0320 06:53:22.995338 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:22 crc kubenswrapper[5136]: E0320 06:53:22.995604 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.495592298 +0000 UTC m=+235.754903449 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:22.999348 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vwn87"
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.007523 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-8mwfm"
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.029621 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.047042 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.077179 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.083453 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.087609 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.104716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.105664 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.605640376 +0000 UTC m=+235.864951547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.105866 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.106205 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.606198034 +0000 UTC m=+235.865509185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.208483 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.210531 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.211715 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.711688349 +0000 UTC m=+235.970999500 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.212393 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.214771 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.714758265 +0000 UTC m=+235.974069406 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.221481 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.251964 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.254913 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.314707 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.315405 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.815389446 +0000 UTC m=+236.074700597 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.328173 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-mmm42"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.333473 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566492-9gbqz"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.335192 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.336368 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.394657 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.408392 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-mtj5k"]
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.417833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.420669 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:23.920655035 +0000 UTC m=+236.179966186 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.518551 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.519121 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.019102268 +0000 UTC m=+236.278413419 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.520735 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" event={"ID":"ffd3e201-0817-43ed-b8db-d7b526017b69","Type":"ContainerStarted","Data":"f9c435b1361f4aa73cc446911a88f27572607f10d212cff6310986474dcd161e"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.531440 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" event={"ID":"dd410106-c7b7-4706-9b99-38e3597ee713","Type":"ContainerStarted","Data":"a0ce110363f63f001f95e50210a962eb678228594f1ac7e014264cd9f0806c96"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.544562 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" event={"ID":"42cc8288-8479-40e0-bb0b-4aad0244d57d","Type":"ContainerStarted","Data":"c80732a9eb4856d211d0a8ceacc7f8b714f2228a260994c0779351420bd9d0bc"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.560561 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" event={"ID":"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7","Type":"ContainerStarted","Data":"598f31a9e66b82a40e6fddae66878ad2c2398e7d79d5ccb145dad09f970accbe"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.560627 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" event={"ID":"ef0aa20b-fefd-4024-82fd-3b5e014ce1d7","Type":"ContainerStarted","Data":"d34b3917abfbbe4a783befb983a39bd2afed047371e4e79761022e7cdf941b29"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.564461 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" event={"ID":"760c854a-7b9d-4582-9bcc-faf077008e0f","Type":"ContainerStarted","Data":"e58d8ba4116f562c0c29dade18892c723e48babb4ac158f18e7e7f62c2685db2"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.567935 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" event={"ID":"22cf75b6-1525-436a-9999-96f3b2393a03","Type":"ContainerStarted","Data":"6bb5e1527d984418016c71f8416f2f0c7d5ade22bc293ba6a117fc47606500fd"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.570551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" event={"ID":"a6490da1-20d4-4a12-bf24-50e24f3217dc","Type":"ContainerStarted","Data":"5db55060d20d852f02c153307634b83636e43c71b86bffdfcc266d2ad3398a33"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.571531 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" event={"ID":"882e7562-0811-4a27-9e79-cae539acc27d","Type":"ContainerStarted","Data":"4827fb4e7d836e80237514799f011d13bd3cecb330a4419a8bb1261c16b45603"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.572702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" event={"ID":"e358e5eb-5d33-4510-a9fd-4dff0323f61a","Type":"ContainerStarted","Data":"e60454c1f81731705b4ba3dc416d89873cc63aa52dfd85ef9dc61c25012e5c21"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.573501 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" event={"ID":"2261aa95-8cc5-4fe7-9515-a065c381aa5b","Type":"ContainerStarted","Data":"3fe424078c970477ad8ebe6562f6b3b232c3d5300b93606ffc8a11a3e84a1f5f"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.573517 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" event={"ID":"2261aa95-8cc5-4fe7-9515-a065c381aa5b","Type":"ContainerStarted","Data":"654898de6202000e3542c5ff7a29d4dc1b9ece9e166152c2aa19f5d10b9b55eb"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.577240 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" event={"ID":"1a566282-9a27-4172-b5ba-574e0179cfc4","Type":"ContainerStarted","Data":"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.577261 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" event={"ID":"1a566282-9a27-4172-b5ba-574e0179cfc4","Type":"ContainerStarted","Data":"df39f87d48bdc4108cfbbd23c050e3dcecc77d5d9cf9eff9e81e1a0106f177c3"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.577693 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p"
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.578402 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" event={"ID":"11250cf1-2849-42f6-8a9c-85d673b4b097","Type":"ContainerStarted","Data":"3094e3e51ca50cd738f21abc71147878b8b084370b1ac58f9d3d57419cc269e4"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.581036 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" event={"ID":"f0ab617f-fa16-4ff5-ad90-328e952d31fb","Type":"ContainerStarted","Data":"a4448870c6cb456232422ba7a723d64a0b4ec045ffd860fb62efe247fbcbf8a4"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.581058 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" event={"ID":"f0ab617f-fa16-4ff5-ad90-328e952d31fb","Type":"ContainerStarted","Data":"742bb152e88fa346f9d3b1bc753b8452eea78b7a4cca847119ee6686fa16e0e7"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.581577 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-7gjxt"
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.581927 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" event={"ID":"62c9b093-fe6a-4484-844b-31bbb4f6b21a","Type":"ContainerStarted","Data":"42b5f734c5b9ca1316f645936059541daf3fe186114025cb4fc14e8310c86195"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.582762 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" event={"ID":"c87c53d2-e35b-43e3-910e-852b635c46b8","Type":"ContainerStarted","Data":"0358dbe9fe3db61891eed24111a66c38993609435b78293e8eaf8af29fdf7324"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.582957 5136 patch_prober.go:28] interesting pod/console-operator-58897d9998-7gjxt container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.582986 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" podUID="f0ab617f-fa16-4ff5-ad90-328e952d31fb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.584690 5136 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-6s42p container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body=
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.584745 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused"
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.588129 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" event={"ID":"edd610c6-14f6-4da1-83ab-b816dac3ed91","Type":"ContainerStarted","Data":"6e4d56e84d0e4688ac8677a97bc75a219910aeea20b1d12dc228013a436922f3"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.593058 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" event={"ID":"ebaac2a5-0001-4d47-9d55-8ff138364356","Type":"ContainerStarted","Data":"fbcf5c5f625e2ed06dffb55171a7e2b2e24cb675a14f792fee9f0be1f9faeed4"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.594524 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" event={"ID":"af7427ab-0805-477b-b064-f4258cef3ace","Type":"ContainerStarted","Data":"7885db517f939ea607039c0ddd0a81609a16502d05d6704ad32a0b3165db7ecc"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.601231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" event={"ID":"e49af127-1dfc-4213-b763-a4283104f38f","Type":"ContainerStarted","Data":"22bf4e5ebeb37f84bd92b2e2813992ed7bb28491de0ca07513340ef547839e00"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.601269 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" event={"ID":"e49af127-1dfc-4213-b763-a4283104f38f","Type":"ContainerStarted","Data":"cb5eadcc27168192635505790744ba41275f4c41315da7278d313b5e06c7924f"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.602111 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" event={"ID":"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8","Type":"ContainerStarted","Data":"07da4e107a7ee6b904be95db1fd6b4beceb4d8ed54972900d21d82ae0100b768"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.604409 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" event={"ID":"5f83cf2a-8b13-4536-bda7-b21bea494966","Type":"ContainerStarted","Data":"eca0407830eed44c4f91baab34017521547657d1ce75fb6d0333f68c340719b6"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.604553 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" event={"ID":"5f83cf2a-8b13-4536-bda7-b21bea494966","Type":"ContainerStarted","Data":"2b63c02749b4ff52bdafe3b161226eb31a8f0572774e8b5d1505df5835a2109d"}
Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.606211 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb"
event={"ID":"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713","Type":"ContainerStarted","Data":"1f8bb7528bcd674ada09b3ecb220cf2f80827c5340131f4615206beea3f0b2b6"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.606945 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" event={"ID":"d0541594-5780-4b00-a3c7-3b132a0cde9b","Type":"ContainerStarted","Data":"84afc305742688cc02836b4a533c51fe1003b758b7eb485230faa6c42e624c3b"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.607725 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" event={"ID":"a437188c-af0a-415d-9b0e-9e5b66f41ea3","Type":"ContainerStarted","Data":"cf6e67e61d17f2b7c25746fa58ae35594ce1fcf236d35ffd8a5a9091f71bc489"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.609492 5136 generic.go:334] "Generic (PLEG): container finished" podID="de5bcbec-966a-4934-b21a-a459ab3eb7bc" containerID="45368ec78c24e1437817b0124ca474de0300b9efd63638fb548ad9b8c674e448" exitCode=0 Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.609539 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" event={"ID":"de5bcbec-966a-4934-b21a-a459ab3eb7bc","Type":"ContainerDied","Data":"45368ec78c24e1437817b0124ca474de0300b9efd63638fb548ad9b8c674e448"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.609555 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" event={"ID":"de5bcbec-966a-4934-b21a-a459ab3eb7bc","Type":"ContainerStarted","Data":"6eb13fc102482510057d7a3e68e222a10a051546686f4d258231548876af1570"} Mar 20 06:53:23 crc kubenswrapper[5136]: W0320 06:53:23.610921 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4583d32_b996_4de0_a7a9_3f13086640a2.slice/crio-1a3c085caf9656e5555545e3c017d027a963769fbf4b120333ad54301bd9c6a0 WatchSource:0}: Error finding container 1a3c085caf9656e5555545e3c017d027a963769fbf4b120333ad54301bd9c6a0: Status 404 returned error can't find the container with id 1a3c085caf9656e5555545e3c017d027a963769fbf4b120333ad54301bd9c6a0 Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.612173 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" event={"ID":"a3ca072d-707e-4c94-9b3a-81eabc72f840","Type":"ContainerStarted","Data":"1f4189f045ae2aca8b168aff9fae0641bbbd4d4ba32863b2e4a93d6d8ce9d1f3"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.612198 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" event={"ID":"a3ca072d-707e-4c94-9b3a-81eabc72f840","Type":"ContainerStarted","Data":"5666e6a7a8c40f224d03d32021f4ab6bbb2a35ed335823806087a5d0d8c0e49c"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.613958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" event={"ID":"a1dff0e1-4e1b-49cc-bc54-d157138a2d20","Type":"ContainerStarted","Data":"186be6371f7e6de9733e46c8382ac36e1e88865fdf462dab21c0c1e0c3c4b946"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.614593 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" event={"ID":"d10c92de-8478-436b-bdc0-0fe231faf35c","Type":"ContainerStarted","Data":"7c8708c89211de774fe69cbbde740b1d14bcd87bd62504d19fad3906617cc6cf"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.616017 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" 
event={"ID":"8b148c18-da73-4c17-85f7-454eebfe96f8","Type":"ContainerStarted","Data":"8a997786fc0c66a6a40628c921501baeb9d2ee3f045da955946a24f19ccdf924"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.617794 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerStarted","Data":"8ab9396d1b0bd00b43015624038265fccd12c5928575d3620513f24c6d495ec3"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.619039 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" event={"ID":"246c7ce4-1953-4a0c-9fed-cabc26f79f3f","Type":"ContainerStarted","Data":"b3048d5cbd327c31d2c12352fc9831d3d60de45f5c32a9d9232ae18a248453d5"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.619537 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.619883 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.119872344 +0000 UTC m=+236.379183495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.622466 5136 patch_prober.go:28] interesting pod/downloads-7954f5f757-djxmj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.622493 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-djxmj" podUID="5491b0c6-578a-430a-82db-943e9c7778e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.622522 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bjqjp" event={"ID":"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448","Type":"ContainerStarted","Data":"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a"} Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.720260 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.720392 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.220372261 +0000 UTC m=+236.479683422 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.721225 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.726707 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.22668978 +0000 UTC m=+236.486000931 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.740172 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.746637 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-jvq8j"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.782059 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-wnlnd"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.804254 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vwn87"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.806657 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv"] Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.823243 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.823484 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.32346018 +0000 UTC m=+236.582771341 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.823978 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.825779 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.325766122 +0000 UTC m=+236.585077273 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.834896 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"] Mar 20 06:53:23 crc kubenswrapper[5136]: W0320 06:53:23.846216 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf3343084_9f31_46fb_8514_b5391882700a.slice/crio-d14aebe384dee05a2f75d06ca638439bc26f535fd08e2a7e48ab58bf7aefe094 WatchSource:0}: Error finding container d14aebe384dee05a2f75d06ca638439bc26f535fd08e2a7e48ab58bf7aefe094: Status 404 returned error can't find the container with id d14aebe384dee05a2f75d06ca638439bc26f535fd08e2a7e48ab58bf7aefe094 Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.926721 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.926879 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.426861369 +0000 UTC m=+236.686172520 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:23 crc kubenswrapper[5136]: I0320 06:53:23.927376 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:23 crc kubenswrapper[5136]: E0320 06:53:23.927663 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.427654684 +0000 UTC m=+236.686965835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.034507 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.034722 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.534697728 +0000 UTC m=+236.794008879 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.035858 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.036152 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.536142113 +0000 UTC m=+236.795453264 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.136843 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.136982 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.6369527 +0000 UTC m=+236.896263861 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.137095 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.137467 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.637456176 +0000 UTC m=+236.896767417 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.242907 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.243684 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.743668004 +0000 UTC m=+237.002979155 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.276624 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" podStartSLOduration=167.276601821 podStartE2EDuration="2m47.276601821s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.275213418 +0000 UTC m=+236.534524569" watchObservedRunningTime="2026-03-20 06:53:24.276601821 +0000 UTC m=+236.535912972" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.312825 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-djxmj" podStartSLOduration=167.312791152 podStartE2EDuration="2m47.312791152s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.310675716 +0000 UTC m=+236.569986887" watchObservedRunningTime="2026-03-20 06:53:24.312791152 +0000 UTC m=+236.572102303" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.345463 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: 
\"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.345937 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.845797992 +0000 UTC m=+237.105109143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.412557 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-vbjpm" podStartSLOduration=167.412534206 podStartE2EDuration="2m47.412534206s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.358066359 +0000 UTC m=+236.617377510" watchObservedRunningTime="2026-03-20 06:53:24.412534206 +0000 UTC m=+236.671845357" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.415695 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-2mt79" podStartSLOduration=168.415686975 podStartE2EDuration="2m48.415686975s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 
06:53:24.401629932 +0000 UTC m=+236.660941083" watchObservedRunningTime="2026-03-20 06:53:24.415686975 +0000 UTC m=+236.674998126" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.432516 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gfft2" podStartSLOduration=167.432501705 podStartE2EDuration="2m47.432501705s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.431851605 +0000 UTC m=+236.691162756" watchObservedRunningTime="2026-03-20 06:53:24.432501705 +0000 UTC m=+236.691812856" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.446236 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.446645 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:24.9466316 +0000 UTC m=+237.205942751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.480971 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59394: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.483018 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" podStartSLOduration=168.483002556 podStartE2EDuration="2m48.483002556s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.482603535 +0000 UTC m=+236.741914686" watchObservedRunningTime="2026-03-20 06:53:24.483002556 +0000 UTC m=+236.742313707" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.549502 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.549828 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-03-20 06:53:25.049800333 +0000 UTC m=+237.309111484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.563388 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-4hr5m" podStartSLOduration=168.56337225 podStartE2EDuration="2m48.56337225s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.515252683 +0000 UTC m=+236.774563834" watchObservedRunningTime="2026-03-20 06:53:24.56337225 +0000 UTC m=+236.822683401" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.591676 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59396: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.635856 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-rmdpp" podStartSLOduration=168.635798443 podStartE2EDuration="2m48.635798443s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.634405508 +0000 UTC m=+236.893716659" watchObservedRunningTime="2026-03-20 06:53:24.635798443 +0000 UTC m=+236.895109614" Mar 20 06:53:24 crc kubenswrapper[5136]: 
I0320 06:53:24.674575 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.675014 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.174998818 +0000 UTC m=+237.434309969 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.683503 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-bjqjp" podStartSLOduration=167.683487956 podStartE2EDuration="2m47.683487956s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.682756702 +0000 UTC m=+236.942067853" watchObservedRunningTime="2026-03-20 06:53:24.683487956 +0000 UTC m=+236.942799117" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.684265 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59404: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: 
I0320 06:53:24.722167 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vwn87" event={"ID":"22df33b0-12a4-40ed-b739-85240eb615e7","Type":"ContainerStarted","Data":"6cf3da78a059636f87f1671b6084b4b1433074b3a2936ed60fa3505b1d65ae6e"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.779658 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59420: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.780835 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.781187 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.281173104 +0000 UTC m=+237.540484255 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.789641 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8mwfm" event={"ID":"f179a691-95b5-4d8a-9f4f-48267b8587a7","Type":"ContainerStarted","Data":"0f3c0b6af995be637e4fce9b2a07c264e92da35e1c58785c07511a1e1c88fd71"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.806140 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" event={"ID":"d10c92de-8478-436b-bdc0-0fe231faf35c","Type":"ContainerStarted","Data":"1b0693c3f9f454fae42bc57cb3a4a88e3443ac311557e5494f894e21066aa033"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.813726 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" event={"ID":"f3343084-9f31-46fb-8514-b5391882700a","Type":"ContainerStarted","Data":"d14aebe384dee05a2f75d06ca638439bc26f535fd08e2a7e48ab58bf7aefe094"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.825006 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" event={"ID":"a437188c-af0a-415d-9b0e-9e5b66f41ea3","Type":"ContainerStarted","Data":"565846fe77d2ea3bc210e26fd8a30a612e89ef72fc426febe9ef6b899e40a7ce"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.826740 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ingress/router-default-5444994796-x4wkf" event={"ID":"b4583d32-b996-4de0-a7a9-3f13086640a2","Type":"ContainerStarted","Data":"1a3c085caf9656e5555545e3c017d027a963769fbf4b120333ad54301bd9c6a0"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.834582 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" event={"ID":"ebaac2a5-0001-4d47-9d55-8ff138364356","Type":"ContainerStarted","Data":"0d984c5a7bc11a44a6c7f8d45353c41f4072a48ab7d51af7e13bc1c65e18ab3a"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.842221 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" event={"ID":"af7427ab-0805-477b-b064-f4258cef3ace","Type":"ContainerStarted","Data":"5616158dc7182557225c3e0b26c91144b02da9b106af89bcff6fea6f07227981"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.858765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" event={"ID":"6ac92b4e-38e5-4858-8b93-41afb63e9cdd","Type":"ContainerStarted","Data":"702c928592059046850fb0c9bb71fb8788e55d41bf051a0e0c5227c4a0538c5b"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.859502 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.860542 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-8mwfm" podStartSLOduration=5.860500795 podStartE2EDuration="5.860500795s" podCreationTimestamp="2026-03-20 06:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.812200482 +0000 UTC m=+237.071511633" watchObservedRunningTime="2026-03-20 
06:53:24.860500795 +0000 UTC m=+237.119811946" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.861328 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6jcpg" podStartSLOduration=167.861319 podStartE2EDuration="2m47.861319s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.858483561 +0000 UTC m=+237.117794712" watchObservedRunningTime="2026-03-20 06:53:24.861319 +0000 UTC m=+237.120630151" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.866880 5136 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jvzhk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.866920 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.868173 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" event={"ID":"a6490da1-20d4-4a12-bf24-50e24f3217dc","Type":"ContainerStarted","Data":"40d12e27b1b19ae666e0ac71be2a2c5717c7ddd546f4dcae698bd602abad54b9"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.872150 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" 
event={"ID":"de5bcbec-966a-4934-b21a-a459ab3eb7bc","Type":"ContainerStarted","Data":"38d1a1bce50aeb2082776397275c0bdad339eeac616164dd2e3f1bc765c49965"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.872758 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.891555 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" event={"ID":"42cc8288-8479-40e0-bb0b-4aad0244d57d","Type":"ContainerStarted","Data":"c4e8486ad2e7ad22d118a8367ae962d3d1cd87e42f1f0e39bcfedffc2be4a27c"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.892535 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.893649 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.393632609 +0000 UTC m=+237.652943760 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.908756 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-k5mxn" podStartSLOduration=167.908733914 podStartE2EDuration="2m47.908733914s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.907023021 +0000 UTC m=+237.166334172" watchObservedRunningTime="2026-03-20 06:53:24.908733914 +0000 UTC m=+237.168045065" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.909614 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-x4wkf" podStartSLOduration=167.909606932 podStartE2EDuration="2m47.909606932s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.879293767 +0000 UTC m=+237.138604928" watchObservedRunningTime="2026-03-20 06:53:24.909606932 +0000 UTC m=+237.168918083" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.910771 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59426: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.913849 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" 
event={"ID":"62c9b093-fe6a-4484-844b-31bbb4f6b21a","Type":"ContainerStarted","Data":"40651d8394309da2c5382af6bf09911a60224a8b222be416700281c8f9b80d53"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.915575 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" event={"ID":"e358e5eb-5d33-4510-a9fd-4dff0323f61a","Type":"ContainerStarted","Data":"3fc945b86d83faded0b9a2ca420cea83c82c4b443395115773ea8f3c7b43628b"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.949645 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd" podStartSLOduration=167.949627844 podStartE2EDuration="2m47.949627844s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.948274061 +0000 UTC m=+237.207585242" watchObservedRunningTime="2026-03-20 06:53:24.949627844 +0000 UTC m=+237.208938995" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.959405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" event={"ID":"22cf75b6-1525-436a-9999-96f3b2393a03","Type":"ContainerStarted","Data":"8622ea42bb57461814d407a2a8e7f7209b99e81fadf2196cd346b13f9f089973"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.962992 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" event={"ID":"3289a6fb-5129-4ab6-b4b1-9d0b2d8af713","Type":"ContainerStarted","Data":"25576b641af1731e1c251801ae920fcd50b8835923991de3496e7405af6a72bf"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.969648 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-tzhmj" podStartSLOduration=167.969628534 podStartE2EDuration="2m47.969628534s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:24.96856189 +0000 UTC m=+237.227873041" watchObservedRunningTime="2026-03-20 06:53:24.969628534 +0000 UTC m=+237.228939685" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.972435 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" event={"ID":"edd610c6-14f6-4da1-83ab-b816dac3ed91","Type":"ContainerStarted","Data":"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.972919 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.982654 5136 generic.go:334] "Generic (PLEG): container finished" podID="11250cf1-2849-42f6-8a9c-85d673b4b097" containerID="d9321dfa634b6bc2734db2b183b6c78c25a56c8ef93219d4d5e5dc7ecacb1c58" exitCode=0 Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.982954 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" event={"ID":"11250cf1-2849-42f6-8a9c-85d673b4b097","Type":"ContainerDied","Data":"d9321dfa634b6bc2734db2b183b6c78c25a56c8ef93219d4d5e5dc7ecacb1c58"} Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.989647 5136 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-m4btr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" 
start-of-body= Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.989702 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": dial tcp 10.217.0.19:8443: connect: connection refused" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.993980 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.994740 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:24 crc kubenswrapper[5136]: I0320 06:53:24.996994 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59428: no serving certificate available for the kubelet" Mar 20 06:53:24 crc kubenswrapper[5136]: E0320 06:53:24.998639 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.498623598 +0000 UTC m=+237.757934739 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.014867 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.014922 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.022825 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" event={"ID":"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8","Type":"ContainerStarted","Data":"f960ca2f1291c6810939c91cb385274efd1428e9971de1fcce80d392e52b2a36"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.029545 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" event={"ID":"ffd3e201-0817-43ed-b8db-d7b526017b69","Type":"ContainerStarted","Data":"3106adddc41afc7b8db88ae34a9fb8c71e28aca084635561785f766119cb834b"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.040876 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" podStartSLOduration=168.040857989 podStartE2EDuration="2m48.040857989s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.015070826 +0000 UTC m=+237.274381977" watchObservedRunningTime="2026-03-20 06:53:25.040857989 +0000 UTC m=+237.300169140" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.041455 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn" podStartSLOduration=168.041451338 podStartE2EDuration="2m48.041451338s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.040233409 +0000 UTC m=+237.299544580" watchObservedRunningTime="2026-03-20 06:53:25.041451338 +0000 UTC m=+237.300762489" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.062778 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-bt884" podStartSLOduration=168.062752639 podStartE2EDuration="2m48.062752639s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.061530281 +0000 UTC m=+237.320841432" watchObservedRunningTime="2026-03-20 06:53:25.062752639 +0000 UTC m=+237.322063790" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.078002 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-mmm42" podStartSLOduration=168.077987939 podStartE2EDuration="2m48.077987939s" 
podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.07676551 +0000 UTC m=+237.336076661" watchObservedRunningTime="2026-03-20 06:53:25.077987939 +0000 UTC m=+237.337299080" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.089486 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59440: no serving certificate available for the kubelet" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.094896 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.097078 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.59706067 +0000 UTC m=+237.856371821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.098604 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wnlnd" event={"ID":"e31fd981-67e5-461a-b43c-89a38265e7ed","Type":"ContainerStarted","Data":"a7d39f953abeee5ac104f8980375277b414efea4f416a8bff30af37b4d8c51c6"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.130952 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" podStartSLOduration=168.130928348 podStartE2EDuration="2m48.130928348s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.125664472 +0000 UTC m=+237.384975633" watchObservedRunningTime="2026-03-20 06:53:25.130928348 +0000 UTC m=+237.390239499" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.142164 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" event={"ID":"1939ab6e-c688-43a4-bca6-7cc00e950962","Type":"ContainerStarted","Data":"e83b244ef8edca84b91c240039546d07171084e496028cd867d04d340981a8dd"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.163929 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" 
event={"ID":"8eccc00e-2821-4e84-9040-6aa1e58daf78","Type":"ContainerStarted","Data":"bc41415d3716cc2fa98d513a3b60344edcee0f515d37ef98cc23fa90327db594"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.164784 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.172929 5136 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xssjv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.172988 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" podUID="8eccc00e-2821-4e84-9040-6aa1e58daf78" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.178368 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-z7wgb" podStartSLOduration=168.178343442 podStartE2EDuration="2m48.178343442s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.173121228 +0000 UTC m=+237.432432379" watchObservedRunningTime="2026-03-20 06:53:25.178343442 +0000 UTC m=+237.437654603" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.196049 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.196833 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.696802663 +0000 UTC m=+237.956113814 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.201226 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" event={"ID":"246c7ce4-1953-4a0c-9fed-cabc26f79f3f","Type":"ContainerStarted","Data":"849d112c18adfaccb4466a6f4badc0f57b72985be4ccd37077a0b537003ae902"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.220518 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" event={"ID":"06133c52-727b-4ded-b835-f0f71093b193","Type":"ContainerStarted","Data":"1ceeb933aba0f0a739131b6cf23737870cd7bd2e3ace7afb1399cc116882334e"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.250428 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" 
event={"ID":"8b148c18-da73-4c17-85f7-454eebfe96f8","Type":"ContainerStarted","Data":"e550a3e89ff736fb6666fbb000a205d11130250397d0ca330e687eab246a4564"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.250895 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.254289 5136 generic.go:334] "Generic (PLEG): container finished" podID="882e7562-0811-4a27-9e79-cae539acc27d" containerID="7fc677eeb46308adb83e089c297439260e9b3e3a9580290e821aa973ad178f55" exitCode=0 Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.254351 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" event={"ID":"882e7562-0811-4a27-9e79-cae539acc27d","Type":"ContainerDied","Data":"7fc677eeb46308adb83e089c297439260e9b3e3a9580290e821aa973ad178f55"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.254846 5136 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2q7k6 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.44:5443/healthz\": dial tcp 10.217.0.44:5443: connect: connection refused" start-of-body= Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.254883 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" podUID="8b148c18-da73-4c17-85f7-454eebfe96f8" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.44:5443/healthz\": dial tcp 10.217.0.44:5443: connect: connection refused" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.257900 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" 
event={"ID":"a1dff0e1-4e1b-49cc-bc54-d157138a2d20","Type":"ContainerStarted","Data":"36881e40ed9dbdf6faaa683ead03eebc8f7c1e93b9589f5f2639a702726803ec"} Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.262959 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" podStartSLOduration=169.262932688 podStartE2EDuration="2m49.262932688s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.259848551 +0000 UTC m=+237.519159702" watchObservedRunningTime="2026-03-20 06:53:25.262932688 +0000 UTC m=+237.522243839" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.269982 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59442: no serving certificate available for the kubelet" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.274178 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.293963 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" podStartSLOduration=168.293942276 podStartE2EDuration="2m48.293942276s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.293680517 +0000 UTC m=+237.552991668" watchObservedRunningTime="2026-03-20 06:53:25.293942276 +0000 UTC m=+237.553253427" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.297623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.299440 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.799418838 +0000 UTC m=+238.058729999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.323506 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-7gjxt" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.357730 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" podStartSLOduration=168.357699265 podStartE2EDuration="2m48.357699265s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.345149199 +0000 UTC m=+237.604460350" watchObservedRunningTime="2026-03-20 06:53:25.357699265 +0000 UTC m=+237.617010416" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.381254 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-etcd-operator/etcd-operator-b45778765-vbv27" podStartSLOduration=168.381230027 podStartE2EDuration="2m48.381230027s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.37913066 +0000 UTC m=+237.638441801" watchObservedRunningTime="2026-03-20 06:53:25.381230027 +0000 UTC m=+237.640541178" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.406888 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.407205 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:25.907189564 +0000 UTC m=+238.166500725 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.434066 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6" podStartSLOduration=168.433405471 podStartE2EDuration="2m48.433405471s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.423060485 +0000 UTC m=+237.682371646" watchObservedRunningTime="2026-03-20 06:53:25.433405471 +0000 UTC m=+237.692716622" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.510033 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.510505 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.01048856 +0000 UTC m=+238.269799711 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.552404 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" podStartSLOduration=168.552388431 podStartE2EDuration="2m48.552388431s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.546594429 +0000 UTC m=+237.805905580" watchObservedRunningTime="2026-03-20 06:53:25.552388431 +0000 UTC m=+237.811699582" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.611268 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.611574 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.111562316 +0000 UTC m=+238.370873467 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.616741 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-ckk2q" podStartSLOduration=168.616724078 podStartE2EDuration="2m48.616724078s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.612605869 +0000 UTC m=+237.871917020" watchObservedRunningTime="2026-03-20 06:53:25.616724078 +0000 UTC m=+237.876035229" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.712150 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.712608 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.21258861 +0000 UTC m=+238.471899761 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.745773 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" podStartSLOduration=168.745758115 podStartE2EDuration="2m48.745758115s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:25.744690182 +0000 UTC m=+238.004001333" watchObservedRunningTime="2026-03-20 06:53:25.745758115 +0000 UTC m=+238.005069266" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.813898 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.814529 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.314518393 +0000 UTC m=+238.573829544 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.915216 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:25 crc kubenswrapper[5136]: E0320 06:53:25.915675 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.415655321 +0000 UTC m=+238.674966472 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.956077 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59450: no serving certificate available for the kubelet" Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.992398 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Mar 20 06:53:25 crc kubenswrapper[5136]: I0320 06:53:25.992493 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.016696 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.017122 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.517107837 +0000 UTC m=+238.776418988 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.119143 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.119404 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.619386551 +0000 UTC m=+238.878697702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.119556 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.119963 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.619952669 +0000 UTC m=+238.879263820 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.220192 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.220372 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.720346893 +0000 UTC m=+238.979658044 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.266727 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-6ckd4" event={"ID":"ffd3e201-0817-43ed-b8db-d7b526017b69","Type":"ContainerStarted","Data":"de42b813a9b449b3fd59c8b332d93332799fe0e9b37f2f6e98f5edca7f1abfa3"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.269503 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-x4wkf" event={"ID":"b4583d32-b996-4de0-a7a9-3f13086640a2","Type":"ContainerStarted","Data":"9097fc1d229ba4b794548200c6312a0fadfa3054180af9444d9b6211395ba870"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.271736 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" event={"ID":"d0541594-5780-4b00-a3c7-3b132a0cde9b","Type":"ContainerStarted","Data":"197e6791e991fbc1f12d2a1bfac284641926e87d4218fbe44784fef266f7fa91"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.271766 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" event={"ID":"d0541594-5780-4b00-a3c7-3b132a0cde9b","Type":"ContainerStarted","Data":"0e2f28811520b1756f9957490750f23662ecf6ad5743a91eff89c3cace3618ba"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.273598 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-x6mlj" 
event={"ID":"22cf75b6-1525-436a-9999-96f3b2393a03","Type":"ContainerStarted","Data":"ba85f0916340422f3af0b1d49eaa0156561d698041c34648084eaafc75eb78e9"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.278241 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" event={"ID":"11250cf1-2849-42f6-8a9c-85d673b4b097","Type":"ContainerStarted","Data":"69aa73b9c14529e02678faa87a73fe136d358701fc09e03d905281d5657dc3e2"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.280170 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" event={"ID":"d10c92de-8478-436b-bdc0-0fe231faf35c","Type":"ContainerStarted","Data":"097c9e2b09fac404f71076e2d5b34157d5201f023221a76f6839900a1a11ce82"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.284352 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-8mwfm" event={"ID":"f179a691-95b5-4d8a-9f4f-48267b8587a7","Type":"ContainerStarted","Data":"49725466cdca9eb8282305588245949042014deb6aed48d138a2c60381277f42"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.286512 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wnlnd" event={"ID":"e31fd981-67e5-461a-b43c-89a38265e7ed","Type":"ContainerStarted","Data":"3ed8cc7f5a2c13811280e34603442cce191f2b4f7a438aa9482226ab0f77cf21"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.286540 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-wnlnd" event={"ID":"e31fd981-67e5-461a-b43c-89a38265e7ed","Type":"ContainerStarted","Data":"1eb72cfbaf35a14e31f7b041ff4416a50cb7d67dfae99a9dae429642a39da5d7"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.286646 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 
06:53:26.287863 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" event={"ID":"8eccc00e-2821-4e84-9040-6aa1e58daf78","Type":"ContainerStarted","Data":"3356c91ae440af57cec371047e496cdbb53a694c0305ed1fe1de971c76b6679d"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.288570 5136 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-xssjv container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" start-of-body= Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.288671 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv" podUID="8eccc00e-2821-4e84-9040-6aa1e58daf78" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.25:8443/healthz\": dial tcp 10.217.0.25:8443: connect: connection refused" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.291897 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" event={"ID":"6ac92b4e-38e5-4858-8b93-41afb63e9cdd","Type":"ContainerStarted","Data":"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.292353 5136 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-jvzhk container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.292387 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" 
podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.293471 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-g7v2v" podStartSLOduration=169.293458278 podStartE2EDuration="2m49.293458278s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.290131023 +0000 UTC m=+238.549442174" watchObservedRunningTime="2026-03-20 06:53:26.293458278 +0000 UTC m=+238.552769429" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.293593 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerStarted","Data":"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.293785 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.295387 5136 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mbfm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body= Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.295431 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.297615 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" event={"ID":"c87c53d2-e35b-43e3-910e-852b635c46b8","Type":"ContainerStarted","Data":"d3b38a9cf0f247ed5baf85944c58830195e8acdcea08686f6dfc0e85d2046aa9"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.309014 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" event={"ID":"f3343084-9f31-46fb-8514-b5391882700a","Type":"ContainerStarted","Data":"bf4db6f60271831db831751595237f8c2e2b9b91d20c2d33a55580bd7e1a983f"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.309060 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" event={"ID":"f3343084-9f31-46fb-8514-b5391882700a","Type":"ContainerStarted","Data":"afc3c45a18f0d7a0ed66d1251fe5653fdfdf6a0a4f25ce210fb40e3d9de6086f"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.309681 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.319613 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vwn87" event={"ID":"22df33b0-12a4-40ed-b739-85240eb615e7","Type":"ContainerStarted","Data":"2e0466445eaab41d677f8725ec8a908d975a4e9a68bb54543470286a58641578"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.322097 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" 
(UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.325014 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.824999171 +0000 UTC m=+239.084310322 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.334350 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" event={"ID":"06133c52-727b-4ded-b835-f0f71093b193","Type":"ContainerStarted","Data":"3d2eb4167ccecde174236c9b509a7a5356f0051d54cc3bd4b2afa69c2aa73612"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.334393 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" event={"ID":"06133c52-727b-4ded-b835-f0f71093b193","Type":"ContainerStarted","Data":"ce09a0e40142a205a97a81d2852860aae8c37bea9efb89833d948a81ae6e385f"} Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.335626 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-zmz56" podStartSLOduration=169.335612946 podStartE2EDuration="2m49.335612946s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.326331354 +0000 UTC m=+238.585642505" watchObservedRunningTime="2026-03-20 06:53:26.335612946 +0000 UTC m=+238.594924097"
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.336160 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-wnlnd" podStartSLOduration=7.336152253 podStartE2EDuration="7.336152253s" podCreationTimestamp="2026-03-20 06:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.305353822 +0000 UTC m=+238.564664973" watchObservedRunningTime="2026-03-20 06:53:26.336152253 +0000 UTC m=+238.595463404"
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.342641 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-mtj5k" event={"ID":"1939ab6e-c688-43a4-bca6-7cc00e950962","Type":"ContainerStarted","Data":"503affd5af773d5a3fd2f8ae26ab7e2e5e115d2eb468df2924f388f2f072a25d"}
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.357588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" event={"ID":"62c9b093-fe6a-4484-844b-31bbb4f6b21a","Type":"ContainerStarted","Data":"c40a893b7fdd886014f148ea246672b8196fd290fa3b16d9dfb39bd1b0417dce"}
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.362424 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" event={"ID":"dd410106-c7b7-4706-9b99-38e3597ee713","Type":"ContainerStarted","Data":"03ce6707867516f8e304a9f71c030446299ebddf31e3aeaaecd43d7f6b52ae1f"}
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.368381 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd"
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.376148 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-2h2rd"
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.407568 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" podStartSLOduration=169.407551203 podStartE2EDuration="2m49.407551203s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.406242762 +0000 UTC m=+238.665553913" watchObservedRunningTime="2026-03-20 06:53:26.407551203 +0000 UTC m=+238.666862354"
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.408271 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" podStartSLOduration=169.408267896 podStartE2EDuration="2m49.408267896s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.375775491 +0000 UTC m=+238.635086672" watchObservedRunningTime="2026-03-20 06:53:26.408267896 +0000 UTC m=+238.667579047"
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.427213 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.427601 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:26.927586065 +0000 UTC m=+239.186897216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.447832 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vwn87" podStartSLOduration=7.447799562 podStartE2EDuration="7.447799562s" podCreationTimestamp="2026-03-20 06:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.427778061 +0000 UTC m=+238.687089212" watchObservedRunningTime="2026-03-20 06:53:26.447799562 +0000 UTC m=+238.707110713"
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.475022 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-j6ffq" podStartSLOduration=169.475000259 podStartE2EDuration="2m49.475000259s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.451181668 +0000 UTC m=+238.710492819" watchObservedRunningTime="2026-03-20 06:53:26.475000259 +0000 UTC m=+238.734311410"
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.512920 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.518494 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-87cfr" podStartSLOduration=169.518478779 podStartE2EDuration="2m49.518478779s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.518051876 +0000 UTC m=+238.777363027" watchObservedRunningTime="2026-03-20 06:53:26.518478779 +0000 UTC m=+238.777789930"
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.531457 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.540744 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.0407287 +0000 UTC m=+239.300039941 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.602907 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-jvq8j" podStartSLOduration=169.60289137 podStartE2EDuration="2m49.60289137s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:26.559031408 +0000 UTC m=+238.818342559" watchObservedRunningTime="2026-03-20 06:53:26.60289137 +0000 UTC m=+238.862202521"
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.633985 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.634117 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.134089393 +0000 UTC m=+239.393400544 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.645462 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.645830 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.145803633 +0000 UTC m=+239.405114784 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.747730 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.747927 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.24789686 +0000 UTC m=+239.507208011 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.748190 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.748608 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.248598182 +0000 UTC m=+239.507909333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.849846 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.850041 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.350013939 +0000 UTC m=+239.609325090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.850112 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.850433 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.350424422 +0000 UTC m=+239.609735573 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.950986 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:26 crc kubenswrapper[5136]: E0320 06:53:26.951331 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.451313531 +0000 UTC m=+239.710624682 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.997368 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 06:53:26 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld
Mar 20 06:53:26 crc kubenswrapper[5136]: [+]process-running ok
Mar 20 06:53:26 crc kubenswrapper[5136]: healthz check failed
Mar 20 06:53:26 crc kubenswrapper[5136]: I0320 06:53:26.997428 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.050171 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2q7k6"
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.052684 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.053071 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.553055327 +0000 UTC m=+239.812366468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.153982 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.154359 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.65434051 +0000 UTC m=+239.913651661 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.255388 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.255713 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.755696764 +0000 UTC m=+240.015007905 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.282335 5136 ???:1] "http: TLS handshake error from 192.168.126.11:59464: no serving certificate available for the kubelet"
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.356409 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.356578 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.856543263 +0000 UTC m=+240.115854424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.356702 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.357118 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.857108921 +0000 UTC m=+240.116420162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.369673 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" event={"ID":"882e7562-0811-4a27-9e79-cae539acc27d","Type":"ContainerStarted","Data":"b4ca4391ad090959d17b15512acbc469c9bf5ade364d5f25b05995c49fe2a254"}
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.374422 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" event={"ID":"11250cf1-2849-42f6-8a9c-85d673b4b097","Type":"ContainerStarted","Data":"b051bc02d93acd983a645fe07f1756f87ee9c08737c671a69f6cc0f73d15e4a8"}
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.374920 5136 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-mbfm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused" start-of-body=
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.374956 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.21:8080/healthz\": dial tcp 10.217.0.21:8080: connect: connection refused"
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.396624 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk"
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.406359 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" podStartSLOduration=170.406342032 podStartE2EDuration="2m50.406342032s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:27.401225311 +0000 UTC m=+239.660536462" watchObservedRunningTime="2026-03-20 06:53:27.406342032 +0000 UTC m=+239.665653183"
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.407447 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"]
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.423560 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-xssjv"
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.428701 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" podStartSLOduration=171.428690577 podStartE2EDuration="2m51.428690577s" podCreationTimestamp="2026-03-20 06:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:27.426589501 +0000 UTC m=+239.685900652" watchObservedRunningTime="2026-03-20 06:53:27.428690577 +0000 UTC m=+239.688001728"
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.457828 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.459082 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:27.959068204 +0000 UTC m=+240.218379355 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.529950 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"]
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.562607 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.562989 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.062978169 +0000 UTC m=+240.322289320 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.663703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.672085 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.172059687 +0000 UTC m=+240.431370838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.766223 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.766658 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.266640819 +0000 UTC m=+240.525951970 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.866945 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.867307 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.367290961 +0000 UTC m=+240.626602112 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.968895 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:27 crc kubenswrapper[5136]: E0320 06:53:27.969169 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.469159231 +0000 UTC m=+240.728470382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.995615 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 20 06:53:27 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld
Mar 20 06:53:27 crc kubenswrapper[5136]: [+]process-running ok
Mar 20 06:53:27 crc kubenswrapper[5136]: healthz check failed
Mar 20 06:53:27 crc kubenswrapper[5136]: I0320 06:53:27.995668 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.059847 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-274sn"
Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.069353 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.069629 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.569616527 +0000 UTC m=+240.828927678 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.171451 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl"
Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.171926 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.671909591 +0000 UTC m=+240.931220742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.272902 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.273352 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.773332288 +0000 UTC m=+241.032643439 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.375004 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.375926 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.87590465 +0000 UTC m=+241.135215881 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.394934 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" event={"ID":"c87c53d2-e35b-43e3-910e-852b635c46b8","Type":"ContainerStarted","Data":"e00eea1e1296704aa22ae062a966e757810f9611d6ea36d0a732f0f106a5ccfb"} Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.394974 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" event={"ID":"c87c53d2-e35b-43e3-910e-852b635c46b8","Type":"ContainerStarted","Data":"85d5e92076ab7de5105e64e5d492ed95d0daf0ce775744ec0b6f965c57817c55"} Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.431030 5136 generic.go:334] "Generic (PLEG): container finished" podID="d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" containerID="f960ca2f1291c6810939c91cb385274efd1428e9971de1fcce80d392e52b2a36" exitCode=0 Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.434324 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerName="route-controller-manager" containerID="cri-o://3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8" gracePeriod=30 Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.434638 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" 
event={"ID":"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8","Type":"ContainerDied","Data":"f960ca2f1291c6810939c91cb385274efd1428e9971de1fcce80d392e52b2a36"} Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.475915 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.476680 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-03-20 06:53:28.976663916 +0000 UTC m=+241.235975067 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.529587 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.579931 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.579947 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: E0320 06:53:28.580214 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-03-20 06:53:29.080200579 +0000 UTC m=+241.339511730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-fk4pl" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.581537 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.592740 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.606681 5136 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.659348 5136 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-03-20T06:53:28.606705824Z","Handler":null,"Name":""} Mar 20 06:53:28 crc 
kubenswrapper[5136]: I0320 06:53:28.672540 5136 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.672588 5136 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.683338 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.698495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.733702 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.734688 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.740951 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.751059 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.785130 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.785216 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs57w\" (UniqueName: \"kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.785286 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.785329 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.789551 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.789590 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.853694 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-fk4pl\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.888867 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.888930 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.888961 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.888990 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs57w\" (UniqueName: \"kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.889033 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqscr\" (UniqueName: \"kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.889055 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc 
kubenswrapper[5136]: I0320 06:53:28.889970 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.890210 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.922201 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.941744 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.944522 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs57w\" (UniqueName: \"kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w\") pod \"certified-operators-gnspw\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.947825 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.949907 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.954461 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.956867 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"] Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.990334 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqscr\" (UniqueName: \"kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.990508 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.990613 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24rh4\" (UniqueName: \"kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.990718 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content\") pod \"certified-operators-5cc6n\" 
(UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.990901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.991019 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.992101 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:28 crc kubenswrapper[5136]: I0320 06:53:28.992475 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.002525 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:29 crc 
kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:29 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:29 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.003667 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.045758 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqscr\" (UniqueName: \"kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr\") pod \"community-operators-hjck6\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.049473 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.058766 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.091838 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sxkv\" (UniqueName: \"kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv\") pod \"edd610c6-14f6-4da1-83ab-b816dac3ed91\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092199 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config\") pod \"edd610c6-14f6-4da1-83ab-b816dac3ed91\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092298 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert\") pod \"edd610c6-14f6-4da1-83ab-b816dac3ed91\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092321 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca\") pod \"edd610c6-14f6-4da1-83ab-b816dac3ed91\" (UID: \"edd610c6-14f6-4da1-83ab-b816dac3ed91\") " Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092447 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092468 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-24rh4\" (UniqueName: \"kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092490 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.092991 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.094498 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca" (OuterVolumeSpecName: "client-ca") pod "edd610c6-14f6-4da1-83ab-b816dac3ed91" (UID: "edd610c6-14f6-4da1-83ab-b816dac3ed91"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.094931 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config" (OuterVolumeSpecName: "config") pod "edd610c6-14f6-4da1-83ab-b816dac3ed91" (UID: "edd610c6-14f6-4da1-83ab-b816dac3ed91"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.096145 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv" (OuterVolumeSpecName: "kube-api-access-6sxkv") pod "edd610c6-14f6-4da1-83ab-b816dac3ed91" (UID: "edd610c6-14f6-4da1-83ab-b816dac3ed91"). InnerVolumeSpecName "kube-api-access-6sxkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.096412 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.101350 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "edd610c6-14f6-4da1-83ab-b816dac3ed91" (UID: "edd610c6-14f6-4da1-83ab-b816dac3ed91"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.127586 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24rh4\" (UniqueName: \"kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4\") pod \"certified-operators-5cc6n\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") " pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.135940 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tk985"] Mar 20 06:53:29 crc kubenswrapper[5136]: E0320 06:53:29.136201 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerName="route-controller-manager" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.136212 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerName="route-controller-manager" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.136309 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerName="route-controller-manager" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.137474 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.145696 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tk985"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.193650 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpmf4\" (UniqueName: \"kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.193768 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.193805 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.208048 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edd610c6-14f6-4da1-83ab-b816dac3ed91-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.208110 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-client-ca\") on node 
\"crc\" DevicePath \"\"" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.208126 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sxkv\" (UniqueName: \"kubernetes.io/projected/edd610c6-14f6-4da1-83ab-b816dac3ed91-kube-api-access-6sxkv\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.208147 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd610c6-14f6-4da1-83ab-b816dac3ed91-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.312432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.312532 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpmf4\" (UniqueName: \"kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.312590 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.313035 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.313066 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.320188 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.332566 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpmf4\" (UniqueName: \"kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4\") pod \"community-operators-tk985\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.465386 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.495914 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" event={"ID":"c87c53d2-e35b-43e3-910e-852b635c46b8","Type":"ContainerStarted","Data":"7eef52a069f0d052b8d42c0b5b34ef56aa6a9c48d0b6b389711f7c437f53aa39"} Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.500379 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.501022 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.540897 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.573299 5136 generic.go:334] "Generic (PLEG): container finished" podID="edd610c6-14f6-4da1-83ab-b816dac3ed91" containerID="3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8" exitCode=0 Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.573597 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.581663 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-pzwlk" podStartSLOduration=10.581648212 podStartE2EDuration="10.581648212s" podCreationTimestamp="2026-03-20 06:53:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:29.581202868 +0000 UTC m=+241.840514019" watchObservedRunningTime="2026-03-20 06:53:29.581648212 +0000 UTC m=+241.840959363" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.582029 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" event={"ID":"edd610c6-14f6-4da1-83ab-b816dac3ed91","Type":"ContainerDied","Data":"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8"} Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.582079 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr" event={"ID":"edd610c6-14f6-4da1-83ab-b816dac3ed91","Type":"ContainerDied","Data":"6e4d56e84d0e4688ac8677a97bc75a219910aeea20b1d12dc228013a436922f3"} Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.582282 5136 scope.go:117] "RemoveContainer" containerID="3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.583123 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" containerID="cri-o://353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae" gracePeriod=30 Mar 20 06:53:29 crc kubenswrapper[5136]: 
I0320 06:53:29.626383 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.636000 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.636051 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.636164 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6xs8\" (UniqueName: \"kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.636196 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: 
I0320 06:53:29.654513 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.672385 5136 scope.go:117] "RemoveContainer" containerID="3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8" Mar 20 06:53:29 crc kubenswrapper[5136]: E0320 06:53:29.675966 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8\": container with ID starting with 3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8 not found: ID does not exist" containerID="3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.676004 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8"} err="failed to get container status \"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8\": rpc error: code = NotFound desc = could not find container \"3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8\": container with ID starting with 3b7b5b56fd70fb895f43d3b98f863ec3e958380c72d3b4cb6ab94f82b28cfdc8 not found: ID does not exist" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.703186 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.708877 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-m4btr"] Mar 20 06:53:29 crc kubenswrapper[5136]: W0320 06:53:29.710862 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16ee2b48_5dea_48c6_888a_ae52ff44afa4.slice/crio-2150aeb737b2b0e0b3d8cc086d915c665b772b5d543fe941e2ca6a177c62b012 WatchSource:0}: Error finding container 2150aeb737b2b0e0b3d8cc086d915c665b772b5d543fe941e2ca6a177c62b012: Status 404 returned error can't find the container with id 2150aeb737b2b0e0b3d8cc086d915c665b772b5d543fe941e2ca6a177c62b012 Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.711021 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.739392 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6xs8\" (UniqueName: \"kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.739447 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.739471 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.739492 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.740286 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.741342 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.747288 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.771265 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6xs8\" (UniqueName: \"kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8\") pod \"route-controller-manager-859847c56f-k7qx9\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " 
pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.858471 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.880595 5136 ???:1] "http: TLS handshake error from 192.168.126.11:48490: no serving certificate available for the kubelet" Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.917586 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:29 crc kubenswrapper[5136]: W0320 06:53:29.927326 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5f9659e_73fb_4389_8d6e_b739dfa94d4b.slice/crio-e01d0fc1f5e7b135553c58dae30642386c72820cdb0d76abc481f734a7517ae7 WatchSource:0}: Error finding container e01d0fc1f5e7b135553c58dae30642386c72820cdb0d76abc481f734a7517ae7: Status 404 returned error can't find the container with id e01d0fc1f5e7b135553c58dae30642386c72820cdb0d76abc481f734a7517ae7 Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.985550 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tk985"] Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.994838 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:29 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:29 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:29 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:29 crc kubenswrapper[5136]: I0320 06:53:29.994889 5136 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.058939 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.148366 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mdvw\" (UniqueName: \"kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw\") pod \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.148636 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") pod \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.148688 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume\") pod \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\" (UID: \"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.149422 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume" (OuterVolumeSpecName: "config-volume") pod "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" (UID: "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.153935 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" (UID: "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.154078 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw" (OuterVolumeSpecName: "kube-api-access-4mdvw") pod "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" (UID: "d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8"). InnerVolumeSpecName "kube-api-access-4mdvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.154339 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.201151 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250394 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert\") pod \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250514 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config\") pod \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250546 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gglvk\" (UniqueName: \"kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk\") pod \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250577 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles\") pod \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250602 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca\") pod 
\"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\" (UID: \"6ac92b4e-38e5-4858-8b93-41afb63e9cdd\") " Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250865 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mdvw\" (UniqueName: \"kubernetes.io/projected/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-kube-api-access-4mdvw\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250877 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.250895 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: W0320 06:53:30.250970 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod444d6afe_1b85_4b31_92c1_06272dd19195.slice/crio-a5bf747a5723dcdad0ee90a2f6a57718884edcf9a88fe8b389b16d15fd63d2ec WatchSource:0}: Error finding container a5bf747a5723dcdad0ee90a2f6a57718884edcf9a88fe8b389b16d15fd63d2ec: Status 404 returned error can't find the container with id a5bf747a5723dcdad0ee90a2f6a57718884edcf9a88fe8b389b16d15fd63d2ec Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.251243 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "6ac92b4e-38e5-4858-8b93-41afb63e9cdd" (UID: "6ac92b4e-38e5-4858-8b93-41afb63e9cdd"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.251359 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca" (OuterVolumeSpecName: "client-ca") pod "6ac92b4e-38e5-4858-8b93-41afb63e9cdd" (UID: "6ac92b4e-38e5-4858-8b93-41afb63e9cdd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.251400 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config" (OuterVolumeSpecName: "config") pod "6ac92b4e-38e5-4858-8b93-41afb63e9cdd" (UID: "6ac92b4e-38e5-4858-8b93-41afb63e9cdd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.254728 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk" (OuterVolumeSpecName: "kube-api-access-gglvk") pod "6ac92b4e-38e5-4858-8b93-41afb63e9cdd" (UID: "6ac92b4e-38e5-4858-8b93-41afb63e9cdd"). InnerVolumeSpecName "kube-api-access-gglvk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.255580 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6ac92b4e-38e5-4858-8b93-41afb63e9cdd" (UID: "6ac92b4e-38e5-4858-8b93-41afb63e9cdd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.352359 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.352686 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gglvk\" (UniqueName: \"kubernetes.io/projected/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-kube-api-access-gglvk\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.352697 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.352707 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.352715 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6ac92b4e-38e5-4858-8b93-41afb63e9cdd-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.406291 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.406981 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd610c6-14f6-4da1-83ab-b816dac3ed91" path="/var/lib/kubelet/pods/edd610c6-14f6-4da1-83ab-b816dac3ed91/volumes" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.583011 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerID="353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae" exitCode=0 Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.583090 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" event={"ID":"6ac92b4e-38e5-4858-8b93-41afb63e9cdd","Type":"ContainerDied","Data":"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.583127 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" event={"ID":"6ac92b4e-38e5-4858-8b93-41afb63e9cdd","Type":"ContainerDied","Data":"702c928592059046850fb0c9bb71fb8788e55d41bf051a0e0c5227c4a0538c5b"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.583152 5136 scope.go:117] "RemoveContainer" containerID="353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.583711 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-jvzhk" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.585108 5136 generic.go:334] "Generic (PLEG): container finished" podID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerID="ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958" exitCode=0 Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.585177 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerDied","Data":"ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.585194 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerStarted","Data":"3e121a671baa07140a3d1cad1e8e105a436e8a55fb9911545361353494c2ebed"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.590922 5136 generic.go:334] "Generic (PLEG): container finished" podID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerID="eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5" exitCode=0 Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.591082 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerDied","Data":"eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.591166 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerStarted","Data":"a72e48c3682399c05912ec6fcf4bd3347709282c92c6d1cf4cee81749234bee6"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.595357 5136 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" event={"ID":"16ee2b48-5dea-48c6-888a-ae52ff44afa4","Type":"ContainerStarted","Data":"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.595407 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" event={"ID":"16ee2b48-5dea-48c6-888a-ae52ff44afa4","Type":"ContainerStarted","Data":"2150aeb737b2b0e0b3d8cc086d915c665b772b5d543fe941e2ca6a177c62b012"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.595434 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.601133 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" event={"ID":"444d6afe-1b85-4b31-92c1-06272dd19195","Type":"ContainerStarted","Data":"f95f9eff44362264182d5f665ca8828c4224a53df30d9a375438216e49570081"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.601170 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" event={"ID":"444d6afe-1b85-4b31-92c1-06272dd19195","Type":"ContainerStarted","Data":"a5bf747a5723dcdad0ee90a2f6a57718884edcf9a88fe8b389b16d15fd63d2ec"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.602790 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.608282 5136 scope.go:117] "RemoveContainer" containerID="353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae" Mar 20 06:53:30 crc kubenswrapper[5136]: E0320 06:53:30.608923 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae\": container with ID starting with 353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae not found: ID does not exist" containerID="353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.608975 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae"} err="failed to get container status \"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae\": rpc error: code = NotFound desc = could not find container \"353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae\": container with ID starting with 353893ea2de7505f1f5161d17b865c2e6035f8c773b381d1a90958aacc3e8eae not found: ID does not exist" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.610820 5136 generic.go:334] "Generic (PLEG): container finished" podID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerID="cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df" exitCode=0 Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.610865 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerDied","Data":"cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.610903 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerStarted","Data":"e01d0fc1f5e7b135553c58dae30642386c72820cdb0d76abc481f734a7517ae7"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.613343 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" event={"ID":"d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8","Type":"ContainerDied","Data":"07da4e107a7ee6b904be95db1fd6b4beceb4d8ed54972900d21d82ae0100b768"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.613373 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07da4e107a7ee6b904be95db1fd6b4beceb4d8ed54972900d21d82ae0100b768" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.613434 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.617767 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerID="9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3" exitCode=0 Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.617924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerDied","Data":"9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.617955 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerStarted","Data":"0fb7591931908d54cba79b40ec6231538415c3ba45eb54605b5ec4b5dd387ac9"} Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.621586 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.623056 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-jvzhk"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 
06:53:30.657971 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" podStartSLOduration=173.657925354 podStartE2EDuration="2m53.657925354s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:30.653568826 +0000 UTC m=+242.912879977" watchObservedRunningTime="2026-03-20 06:53:30.657925354 +0000 UTC m=+242.917236505" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.688784 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" podStartSLOduration=2.688766346 podStartE2EDuration="2.688766346s" podCreationTimestamp="2026-03-20 06:53:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:30.670981435 +0000 UTC m=+242.930292586" watchObservedRunningTime="2026-03-20 06:53:30.688766346 +0000 UTC m=+242.948077497" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.720382 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:53:30 crc kubenswrapper[5136]: E0320 06:53:30.720600 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" containerName="collect-profiles" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.720615 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" containerName="collect-profiles" Mar 20 06:53:30 crc kubenswrapper[5136]: E0320 06:53:30.720631 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.720639 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.720733 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" containerName="collect-profiles" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.720747 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" containerName="controller-manager" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.721411 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.724071 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.732100 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.758599 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.758846 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnvxm\" (UniqueName: \"kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.758951 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.860099 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnvxm\" (UniqueName: \"kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.860178 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.860311 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.860646 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.860946 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.879583 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnvxm\" (UniqueName: \"kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm\") pod \"redhat-marketplace-zvjw4\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.949094 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.950280 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.952992 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.954894 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 20 06:53:30 crc kubenswrapper[5136]: I0320 06:53:30.954975 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.003278 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.005612 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router 
namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:31 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:31 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:31 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.005663 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.038137 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.053976 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.054604 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.059517 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.059645 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.071165 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.103823 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.103858 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.124260 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.125952 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.129946 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205076 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205135 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205161 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205185 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205201 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205355 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205401 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.205536 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xph6j\" (UniqueName: \"kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.235583 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307170 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307269 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xph6j\" (UniqueName: \"kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307311 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307360 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307378 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.307704 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" 
(UniqueName: \"kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.308281 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.312008 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.313871 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.324125 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.334509 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xph6j\" (UniqueName: \"kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j\") pod \"redhat-marketplace-h56wl\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.403791 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.441248 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.491794 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.498724 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.499336 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.506416 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.506700 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.506786 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.506802 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.507100 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.507577 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.511844 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.532612 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.612079 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " 
pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.612336 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.612424 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.612471 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qpmj\" (UniqueName: \"kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.612497 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.714052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.714126 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qpmj\" (UniqueName: \"kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.714148 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.714221 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.715367 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.720499 
5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.720998 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.721080 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.721730 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.722889 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.727000 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.737352 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.738410 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qpmj\" (UniqueName: \"kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj\") pod \"controller-manager-7cd44f687c-8mp5h\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") " pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.746235 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.747802 5136 patch_prober.go:28] interesting pod/downloads-7954f5f757-djxmj container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.747891 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-djxmj" podUID="5491b0c6-578a-430a-82db-943e9c7778e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 20 06:53:31 crc kubenswrapper[5136]: 
I0320 06:53:31.755241 5136 patch_prober.go:28] interesting pod/downloads-7954f5f757-djxmj container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" start-of-body= Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.755303 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-djxmj" podUID="5491b0c6-578a-430a-82db-943e9c7778e5" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.12:8080/\": dial tcp 10.217.0.12:8080: connect: connection refused" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.780948 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.781011 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.783185 5136 patch_prober.go:28] interesting pod/console-f9d7485db-bjqjp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.783241 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-bjqjp" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.816988 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9jhq\" (UniqueName: 
\"kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.817336 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.817466 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.818144 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.919274 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9jhq\" (UniqueName: \"kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.919381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.919415 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.920331 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.920611 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " 
pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.953469 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9jhq\" (UniqueName: \"kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq\") pod \"redhat-operators-w76x4\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.993741 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:31 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:31 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:31 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:31 crc kubenswrapper[5136]: I0320 06:53:31.993792 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.056319 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.120784 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"] Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.122242 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.131153 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"] Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.222790 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.222872 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8zcs\" (UniqueName: \"kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.222982 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.255557 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.255607 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.261878 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.265379 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.265406 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.272712 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.324165 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.324270 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.324316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8zcs\" (UniqueName: \"kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.325616 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.325715 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.346968 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8zcs\" (UniqueName: \"kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs\") pod \"redhat-operators-ccgmd\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") " pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.417853 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ac92b4e-38e5-4858-8b93-41afb63e9cdd" path="/var/lib/kubelet/pods/6ac92b4e-38e5-4858-8b93-41afb63e9cdd/volumes" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.478205 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.649637 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-glmlt" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.650875 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-mq9hd" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.850432 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.991328 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.993676 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:32 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:32 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:32 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:32 crc kubenswrapper[5136]: I0320 06:53:32.993986 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:33 crc kubenswrapper[5136]: I0320 06:53:33.994138 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 
20 06:53:33 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:33 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:33 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:33 crc kubenswrapper[5136]: I0320 06:53:33.994481 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:34 crc kubenswrapper[5136]: I0320 06:53:34.993731 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:34 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:34 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:34 crc kubenswrapper[5136]: healthz check failed Mar 20 06:53:34 crc kubenswrapper[5136]: I0320 06:53:34.993785 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:35 crc kubenswrapper[5136]: I0320 06:53:35.033445 5136 ???:1] "http: TLS handshake error from 192.168.126.11:48498: no serving certificate available for the kubelet" Mar 20 06:53:35 crc kubenswrapper[5136]: I0320 06:53:35.993312 5136 patch_prober.go:28] interesting pod/router-default-5444994796-x4wkf container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 20 06:53:35 crc kubenswrapper[5136]: [-]has-synced failed: reason withheld Mar 20 06:53:35 crc kubenswrapper[5136]: [+]process-running ok Mar 20 06:53:35 crc 
kubenswrapper[5136]: healthz check failed Mar 20 06:53:35 crc kubenswrapper[5136]: I0320 06:53:35.993384 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-x4wkf" podUID="b4583d32-b996-4de0-a7a9-3f13086640a2" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.228047 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.229680 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.244185 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5572feb-df7d-4f3a-9b83-3be3de943668-metrics-certs\") pod \"network-metrics-daemon-jz6hg\" (UID: \"b5572feb-df7d-4f3a-9b83-3be3de943668\") " pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.318361 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.327315 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-jz6hg" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.994241 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:36 crc kubenswrapper[5136]: I0320 06:53:36.996111 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-x4wkf" Mar 20 06:53:37 crc kubenswrapper[5136]: I0320 06:53:37.136298 5136 ???:1] "http: TLS handshake error from 192.168.126.11:48504: no serving certificate available for the kubelet" Mar 20 06:53:38 crc kubenswrapper[5136]: I0320 06:53:38.000787 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-wnlnd" Mar 20 06:53:41 crc kubenswrapper[5136]: I0320 06:53:41.753095 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-djxmj" Mar 20 06:53:41 crc kubenswrapper[5136]: I0320 06:53:41.790426 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:41 crc kubenswrapper[5136]: I0320 06:53:41.796614 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-bjqjp" Mar 20 06:53:42 crc kubenswrapper[5136]: W0320 06:53:42.434094 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod301e1f09_ed9b_4d2f_ae95_c098e8ae4dd5.slice/crio-20ff1b017eb3949453a5c8e4e818cc934899998a077617b8ba9ac7f5655008d9 WatchSource:0}: Error finding container 20ff1b017eb3949453a5c8e4e818cc934899998a077617b8ba9ac7f5655008d9: Status 404 returned error can't find the container with id 20ff1b017eb3949453a5c8e4e818cc934899998a077617b8ba9ac7f5655008d9 Mar 20 06:53:42 crc kubenswrapper[5136]: I0320 06:53:42.716383 5136 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerStarted","Data":"20ff1b017eb3949453a5c8e4e818cc934899998a077617b8ba9ac7f5655008d9"} Mar 20 06:53:42 crc kubenswrapper[5136]: I0320 06:53:42.859080 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Mar 20 06:53:45 crc kubenswrapper[5136]: I0320 06:53:45.309153 5136 ???:1] "http: TLS handshake error from 192.168.126.11:45064: no serving certificate available for the kubelet" Mar 20 06:53:45 crc kubenswrapper[5136]: E0320 06:53:45.630564 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest" Mar 20 06:53:45 crc kubenswrapper[5136]: E0320 06:53:45.631369 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 06:53:45 crc kubenswrapper[5136]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Mar 20 06:53:45 crc kubenswrapper[5136]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-pjzr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29566492-9gbqz_openshift-infra(760c854a-7b9d-4582-9bcc-faf077008e0f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled Mar 20 06:53:45 crc kubenswrapper[5136]: > logger="UnhandledError" Mar 20 06:53:45 crc kubenswrapper[5136]: E0320 06:53:45.633126 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" Mar 20 06:53:45 crc kubenswrapper[5136]: E0320 06:53:45.742487 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" Mar 20 06:53:45 crc kubenswrapper[5136]: I0320 06:53:45.822522 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:53:45 crc kubenswrapper[5136]: I0320 06:53:45.822583 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:53:46 crc kubenswrapper[5136]: I0320 06:53:46.186970 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 06:53:46 crc kubenswrapper[5136]: I0320 06:53:46.694149 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"] Mar 20 06:53:46 crc kubenswrapper[5136]: I0320 06:53:46.728282 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:46 crc kubenswrapper[5136]: I0320 06:53:46.728503 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" containerName="route-controller-manager" containerID="cri-o://f95f9eff44362264182d5f665ca8828c4224a53df30d9a375438216e49570081" gracePeriod=30 Mar 20 06:53:47 crc kubenswrapper[5136]: I0320 06:53:47.749672 5136 generic.go:334] "Generic (PLEG): container finished" podID="444d6afe-1b85-4b31-92c1-06272dd19195" containerID="f95f9eff44362264182d5f665ca8828c4224a53df30d9a375438216e49570081" exitCode=0 Mar 20 06:53:47 crc kubenswrapper[5136]: I0320 06:53:47.749781 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" 
event={"ID":"444d6afe-1b85-4b31-92c1-06272dd19195","Type":"ContainerDied","Data":"f95f9eff44362264182d5f665ca8828c4224a53df30d9a375438216e49570081"} Mar 20 06:53:48 crc kubenswrapper[5136]: I0320 06:53:48.949518 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:53:49 crc kubenswrapper[5136]: I0320 06:53:49.918647 5136 patch_prober.go:28] interesting pod/route-controller-manager-859847c56f-k7qx9 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" start-of-body= Mar 20 06:53:49 crc kubenswrapper[5136]: I0320 06:53:49.919128 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.49:8443/healthz\": dial tcp 10.217.0.49:8443: connect: connection refused" Mar 20 06:53:51 crc kubenswrapper[5136]: I0320 06:53:51.774583 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"016ed817-0956-4149-b109-fbdbd9534b4f","Type":"ContainerStarted","Data":"63159ed80d5b5e8e1081e09634c709298d8870dc48598c6ff1bd3d48d579726f"} Mar 20 06:53:53 crc kubenswrapper[5136]: E0320 06:53:53.152877 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 20 06:53:53 crc kubenswrapper[5136]: E0320 06:53:53.153030 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qpmf4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tk985_openshift-marketplace(0ecf0c0d-35e3-402c-ac3a-60bb2686de5e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 06:53:53 crc kubenswrapper[5136]: E0320 06:53:53.154522 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: 
code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tk985" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.606513 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tk985" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.689728 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.690490 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mqscr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-hjck6_openshift-marketplace(899bb83b-4a95-49e5-8e8f-50c309b5d5e1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.692776 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-hjck6" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" Mar 20 06:53:54 crc 
kubenswrapper[5136]: E0320 06:53:54.700095 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.700243 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bs57w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
certified-operators-gnspw_openshift-marketplace(8a3a1d9c-1870-4a43-95fb-6d07e5619acb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.703547 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gnspw" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.719069 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.719126 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.719307 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-24rh4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-5cc6n_openshift-marketplace(b5f9659e-73fb-4389-8d6e-b739dfa94d4b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.720890 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-5cc6n" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" Mar 20 06:53:54 crc 
kubenswrapper[5136]: I0320 06:53:54.763347 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"] Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.763903 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" containerName="route-controller-manager" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.763914 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" containerName="route-controller-manager" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.764038 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" containerName="route-controller-manager" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.764428 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.770157 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"] Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.789521 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.789651 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9" event={"ID":"444d6afe-1b85-4b31-92c1-06272dd19195","Type":"ContainerDied","Data":"a5bf747a5723dcdad0ee90a2f6a57718884edcf9a88fe8b389b16d15fd63d2ec"} Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.789687 5136 scope.go:117] "RemoveContainer" containerID="f95f9eff44362264182d5f665ca8828c4224a53df30d9a375438216e49570081" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.795549 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-5cc6n" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.795622 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gnspw" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.804225 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca\") pod \"444d6afe-1b85-4b31-92c1-06272dd19195\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.804307 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert\") pod \"444d6afe-1b85-4b31-92c1-06272dd19195\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.804379 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config\") pod \"444d6afe-1b85-4b31-92c1-06272dd19195\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.804445 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6xs8\" (UniqueName: \"kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8\") pod \"444d6afe-1b85-4b31-92c1-06272dd19195\" (UID: \"444d6afe-1b85-4b31-92c1-06272dd19195\") " Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.805153 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca" (OuterVolumeSpecName: "client-ca") pod "444d6afe-1b85-4b31-92c1-06272dd19195" (UID: "444d6afe-1b85-4b31-92c1-06272dd19195"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.806287 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config" (OuterVolumeSpecName: "config") pod "444d6afe-1b85-4b31-92c1-06272dd19195" (UID: "444d6afe-1b85-4b31-92c1-06272dd19195"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:53:54 crc kubenswrapper[5136]: E0320 06:53:54.816241 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-hjck6" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.823202 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "444d6afe-1b85-4b31-92c1-06272dd19195" (UID: "444d6afe-1b85-4b31-92c1-06272dd19195"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.827653 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8" (OuterVolumeSpecName: "kube-api-access-z6xs8") pod "444d6afe-1b85-4b31-92c1-06272dd19195" (UID: "444d6afe-1b85-4b31-92c1-06272dd19195"). InnerVolumeSpecName "kube-api-access-z6xs8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.905585 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sf5v\" (UniqueName: \"kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.905937 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.905961 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.905998 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.906079 5136 reconciler_common.go:293] "Volume detached for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/444d6afe-1b85-4b31-92c1-06272dd19195-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.906091 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.906102 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6xs8\" (UniqueName: \"kubernetes.io/projected/444d6afe-1b85-4b31-92c1-06272dd19195-kube-api-access-z6xs8\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:54 crc kubenswrapper[5136]: I0320 06:53:54.906111 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/444d6afe-1b85-4b31-92c1-06272dd19195-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.006771 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sf5v\" (UniqueName: \"kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.006989 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.007019 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.007054 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.008387 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.008603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.013304 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 
06:53:55.021337 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sf5v\" (UniqueName: \"kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v\") pod \"route-controller-manager-6697778f8b-jj747\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") " pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.102928 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.109070 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"] Mar 20 06:53:55 crc kubenswrapper[5136]: W0320 06:53:55.123629 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b7a79e9_d592_4c23_bde5_3fa7250e3c2d.slice/crio-b6a47ad0b8d3083dba91f0461003306d2a38b418ea73efce3c166ddfb80853b4 WatchSource:0}: Error finding container b6a47ad0b8d3083dba91f0461003306d2a38b418ea73efce3c166ddfb80853b4: Status 404 returned error can't find the container with id b6a47ad0b8d3083dba91f0461003306d2a38b418ea73efce3c166ddfb80853b4 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.129156 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.133498 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-859847c56f-k7qx9"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.223921 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-jz6hg"] Mar 20 06:53:55 crc kubenswrapper[5136]: W0320 06:53:55.227095 5136 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1cf24582_9ee1_4a25_9293_b116d55e6465.slice/crio-e97be04ec45ad4ff430feb5809767da522725df9a1e53411e7387d687914904f WatchSource:0}: Error finding container e97be04ec45ad4ff430feb5809767da522725df9a1e53411e7387d687914904f: Status 404 returned error can't find the container with id e97be04ec45ad4ff430feb5809767da522725df9a1e53411e7387d687914904f Mar 20 06:53:55 crc kubenswrapper[5136]: W0320 06:53:55.236876 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5572feb_df7d_4f3a_9b83_3be3de943668.slice/crio-04630d157ff678919ea02ec74f06fe0c149c30be389654ed54b03feabe7962f1 WatchSource:0}: Error finding container 04630d157ff678919ea02ec74f06fe0c149c30be389654ed54b03feabe7962f1: Status 404 returned error can't find the container with id 04630d157ff678919ea02ec74f06fe0c149c30be389654ed54b03feabe7962f1 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.236956 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.245553 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.247828 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:53:55 crc kubenswrapper[5136]: W0320 06:53:55.249969 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc390cc35_103e_4376_a377_789d27e92301.slice/crio-6945d5a08a47f91726889cbb1087073d5ea5c7ff5da1680cfa09cb30c8ba3897 WatchSource:0}: Error finding container 6945d5a08a47f91726889cbb1087073d5ea5c7ff5da1680cfa09cb30c8ba3897: Status 404 returned error can't find the container with id 
6945d5a08a47f91726889cbb1087073d5ea5c7ff5da1680cfa09cb30c8ba3897 Mar 20 06:53:55 crc kubenswrapper[5136]: W0320 06:53:55.253223 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff9e0ea6_add4_4087_83a6_f8d85588d6f2.slice/crio-b157f4d8fc06233ee1c508206beda933efb32ecebe1735837a9ebedb70d95894 WatchSource:0}: Error finding container b157f4d8fc06233ee1c508206beda933efb32ecebe1735837a9ebedb70d95894: Status 404 returned error can't find the container with id b157f4d8fc06233ee1c508206beda933efb32ecebe1735837a9ebedb70d95894 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.300170 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.317624 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.796274 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerID="0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675" exitCode=0 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.796490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerDied","Data":"0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.796668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerStarted","Data":"b157f4d8fc06233ee1c508206beda933efb32ecebe1735837a9ebedb70d95894"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.801629 5136 generic.go:334] "Generic (PLEG): container 
finished" podID="016ed817-0956-4149-b109-fbdbd9534b4f" containerID="30b021412f2df604ec6c72463282d2652e007ea16824c3bea15b401cdb18360f" exitCode=0 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.801725 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"016ed817-0956-4149-b109-fbdbd9534b4f","Type":"ContainerDied","Data":"30b021412f2df604ec6c72463282d2652e007ea16824c3bea15b401cdb18360f"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.805681 5136 generic.go:334] "Generic (PLEG): container finished" podID="c390cc35-103e-4376-a377-789d27e92301" containerID="881e29f20587338b4b26358412017a32346fc442b6e12fb3a66b43ae0eca1b2a" exitCode=0 Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.805896 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerDied","Data":"881e29f20587338b4b26358412017a32346fc442b6e12fb3a66b43ae0eca1b2a"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.806176 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerStarted","Data":"6945d5a08a47f91726889cbb1087073d5ea5c7ff5da1680cfa09cb30c8ba3897"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.807904 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" event={"ID":"b5572feb-df7d-4f3a-9b83-3be3de943668","Type":"ContainerStarted","Data":"f71c6d1d6f78cf523da112838eb4ebd8e2e0a0c9573fd81b271bc1fe978065fc"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.807950 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" 
event={"ID":"b5572feb-df7d-4f3a-9b83-3be3de943668","Type":"ContainerStarted","Data":"04630d157ff678919ea02ec74f06fe0c149c30be389654ed54b03feabe7962f1"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.823737 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" event={"ID":"433e77aa-fe22-43d7-87ed-0a9219b61762","Type":"ContainerStarted","Data":"0fb2ad703d57143cc353d8d80b57c7e9e8375a9fe5053d2f01f256b52277bbbb"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.823797 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" event={"ID":"433e77aa-fe22-43d7-87ed-0a9219b61762","Type":"ContainerStarted","Data":"dd09161821b366f7bf1ba043da9d6862a2982103d6804207a3cd3950019ad2ae"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.823934 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.841855 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" event={"ID":"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d","Type":"ContainerStarted","Data":"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.841897 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" event={"ID":"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d","Type":"ContainerStarted","Data":"b6a47ad0b8d3083dba91f0461003306d2a38b418ea73efce3c166ddfb80853b4"} Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.842005 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" 
podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerName="controller-manager" containerID="cri-o://c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41" gracePeriod=30
Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.842423 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"
Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.864157 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"258f752a-780a-4668-bfd4-6276c1a17472","Type":"ContainerStarted","Data":"70d4619eaf0c736c634a306d07c2121b8e39d2ad6f89479fd1e51118b7a87642"}
Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.867142 5136 patch_prober.go:28] interesting pod/controller-manager-7cd44f687c-8mp5h container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:56018->10.217.0.54:8443: read: connection reset by peer" start-of-body=
Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.867189 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": read tcp 10.217.0.2:56018->10.217.0.54:8443: read: connection reset by peer"
Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.870505 5136 generic.go:334] "Generic (PLEG): container finished" podID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerID="c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50" exitCode=0
Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.870576 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerDied","Data":"c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50"}
Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.870603 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerStarted","Data":"e97be04ec45ad4ff430feb5809767da522725df9a1e53411e7387d687914904f"}
Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.881485 5136 generic.go:334] "Generic (PLEG): container finished" podID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerID="e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d" exitCode=0
Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.881526 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerDied","Data":"e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d"}
Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.890488 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" podStartSLOduration=9.890470406 podStartE2EDuration="9.890470406s" podCreationTimestamp="2026-03-20 06:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:55.888513301 +0000 UTC m=+268.147824462" watchObservedRunningTime="2026-03-20 06:53:55.890470406 +0000 UTC m=+268.149781557"
Mar 20 06:53:55 crc kubenswrapper[5136]: I0320 06:53:55.954275 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" podStartSLOduration=27.954256399 podStartE2EDuration="27.954256399s" podCreationTimestamp="2026-03-20 06:53:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:55.94879509 +0000 UTC m=+268.208106241" watchObservedRunningTime="2026-03-20 06:53:55.954256399 +0000 UTC m=+268.213567550"
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.220306 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.292713 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.332964 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config\") pod \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") "
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.333024 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qpmj\" (UniqueName: \"kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj\") pod \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") "
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.333058 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert\") pod \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") "
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.333110 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles\") pod \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") "
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.333129 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca\") pod \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\" (UID: \"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d\") "
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.333854 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" (UID: "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.334146 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" (UID: "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.334178 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config" (OuterVolumeSpecName: "config") pod "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" (UID: "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.338964 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" (UID: "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.338986 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj" (OuterVolumeSpecName: "kube-api-access-9qpmj") pod "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" (UID: "8b7a79e9-d592-4c23-bde5-3fa7250e3c2d"). InnerVolumeSpecName "kube-api-access-9qpmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.403685 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="444d6afe-1b85-4b31-92c1-06272dd19195" path="/var/lib/kubelet/pods/444d6afe-1b85-4b31-92c1-06272dd19195/volumes"
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.434929 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.434953 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qpmj\" (UniqueName: \"kubernetes.io/projected/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-kube-api-access-9qpmj\") on node \"crc\" DevicePath \"\""
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.434963 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.434971 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.434979 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d-client-ca\") on node \"crc\" DevicePath \"\""
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.889226 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-jz6hg" event={"ID":"b5572feb-df7d-4f3a-9b83-3be3de943668","Type":"ContainerStarted","Data":"f373c5d179e20f6e719f303f76a58be0be6442e3e03ad793afc072949610b412"}
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.890562 5136 generic.go:334] "Generic (PLEG): container finished" podID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerID="c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41" exitCode=0
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.890619 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.890622 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" event={"ID":"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d","Type":"ContainerDied","Data":"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41"}
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.890746 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7cd44f687c-8mp5h" event={"ID":"8b7a79e9-d592-4c23-bde5-3fa7250e3c2d","Type":"ContainerDied","Data":"b6a47ad0b8d3083dba91f0461003306d2a38b418ea73efce3c166ddfb80853b4"}
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.890784 5136 scope.go:117] "RemoveContainer" containerID="c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41"
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.892783 5136 generic.go:334] "Generic (PLEG): container finished" podID="258f752a-780a-4668-bfd4-6276c1a17472" containerID="ed45d1e3327dfd2e5629bd12f68d4aefb2d2d10f0e8fa7d15a26d034a1816553" exitCode=0
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.893000 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"258f752a-780a-4668-bfd4-6276c1a17472","Type":"ContainerDied","Data":"ed45d1e3327dfd2e5629bd12f68d4aefb2d2d10f0e8fa7d15a26d034a1816553"}
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.911186 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-jz6hg" podStartSLOduration=199.911170464 podStartE2EDuration="3m19.911170464s" podCreationTimestamp="2026-03-20 06:50:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:56.906103838 +0000 UTC m=+269.165414999" watchObservedRunningTime="2026-03-20 06:53:56.911170464 +0000 UTC m=+269.170481615"
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.918145 5136 scope.go:117] "RemoveContainer" containerID="c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41"
Mar 20 06:53:56 crc kubenswrapper[5136]: E0320 06:53:56.921081 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41\": container with ID starting with c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41 not found: ID does not exist" containerID="c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41"
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.921110 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41"} err="failed to get container status \"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41\": rpc error: code = NotFound desc = could not find container \"c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41\": container with ID starting with c1cecd4a549e6d8d2a2c0924e19af39751579b76effbcc593b74081f22961c41 not found: ID does not exist"
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.925852 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"]
Mar 20 06:53:56 crc kubenswrapper[5136]: I0320 06:53:56.927156 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cd44f687c-8mp5h"]
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.174400 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.255760 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir\") pod \"016ed817-0956-4149-b109-fbdbd9534b4f\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") "
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.255919 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access\") pod \"016ed817-0956-4149-b109-fbdbd9534b4f\" (UID: \"016ed817-0956-4149-b109-fbdbd9534b4f\") "
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.256026 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "016ed817-0956-4149-b109-fbdbd9534b4f" (UID: "016ed817-0956-4149-b109-fbdbd9534b4f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.256247 5136 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/016ed817-0956-4149-b109-fbdbd9534b4f-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.261785 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "016ed817-0956-4149-b109-fbdbd9534b4f" (UID: "016ed817-0956-4149-b109-fbdbd9534b4f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.357224 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/016ed817-0956-4149-b109-fbdbd9534b4f-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.518638 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"]
Mar 20 06:53:57 crc kubenswrapper[5136]: E0320 06:53:57.519260 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="016ed817-0956-4149-b109-fbdbd9534b4f" containerName="pruner"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.519275 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="016ed817-0956-4149-b109-fbdbd9534b4f" containerName="pruner"
Mar 20 06:53:57 crc kubenswrapper[5136]: E0320 06:53:57.519290 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerName="controller-manager"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.519299 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerName="controller-manager"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.519409 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="016ed817-0956-4149-b109-fbdbd9534b4f" containerName="pruner"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.519428 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" containerName="controller-manager"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.519870 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.522954 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.523199 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.523148 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.523321 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.523404 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.523630 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.530670 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"]
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.531000 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.661391 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.661454 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.661476 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkfns\" (UniqueName: \"kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.661542 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.661563 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.763392 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.763464 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.763486 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkfns\" (UniqueName: \"kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.763518 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.763542 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.764476 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.765499 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.767078 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.784671 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.787171 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkfns\" (UniqueName: \"kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns\") pod \"controller-manager-5fcfc77697-j6jd6\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.844396 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.901870 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.903308 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"016ed817-0956-4149-b109-fbdbd9534b4f","Type":"ContainerDied","Data":"63159ed80d5b5e8e1081e09634c709298d8870dc48598c6ff1bd3d48d579726f"}
Mar 20 06:53:57 crc kubenswrapper[5136]: I0320 06:53:57.903359 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63159ed80d5b5e8e1081e09634c709298d8870dc48598c6ff1bd3d48d579726f"
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.035564 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"]
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.112524 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.270968 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access\") pod \"258f752a-780a-4668-bfd4-6276c1a17472\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") "
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.271848 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir\") pod \"258f752a-780a-4668-bfd4-6276c1a17472\" (UID: \"258f752a-780a-4668-bfd4-6276c1a17472\") "
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.271987 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "258f752a-780a-4668-bfd4-6276c1a17472" (UID: "258f752a-780a-4668-bfd4-6276c1a17472"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.274003 5136 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/258f752a-780a-4668-bfd4-6276c1a17472-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.277850 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "258f752a-780a-4668-bfd4-6276c1a17472" (UID: "258f752a-780a-4668-bfd4-6276c1a17472"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.374726 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/258f752a-780a-4668-bfd4-6276c1a17472-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.405119 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b7a79e9-d592-4c23-bde5-3fa7250e3c2d" path="/var/lib/kubelet/pods/8b7a79e9-d592-4c23-bde5-3fa7250e3c2d/volumes"
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.924540 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.924568 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"258f752a-780a-4668-bfd4-6276c1a17472","Type":"ContainerDied","Data":"70d4619eaf0c736c634a306d07c2121b8e39d2ad6f89479fd1e51118b7a87642"}
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.924604 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70d4619eaf0c736c634a306d07c2121b8e39d2ad6f89479fd1e51118b7a87642"
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.933790 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" event={"ID":"760c854a-7b9d-4582-9bcc-faf077008e0f","Type":"ContainerStarted","Data":"bcfaf55d80db554feaeb3774c52e47f9f050ba6262ba6f06d4b21a8da6ad81d5"}
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.944414 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" event={"ID":"ba8ace97-a564-4c66-a39d-31d3d1192731","Type":"ContainerStarted","Data":"34c101d578be5c8e4dd690b32bec0f18aff05b8af94343c38b5c583125107f43"}
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.944466 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" event={"ID":"ba8ace97-a564-4c66-a39d-31d3d1192731","Type":"ContainerStarted","Data":"6e7cd78c8651d266a5f2ac19a028eab78f1472c39aa6a54381d2ce52eedcf623"}
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.944898 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.956402 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"
Mar 20 06:53:58 crc kubenswrapper[5136]: I0320 06:53:58.969918 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" podStartSLOduration=83.94129221 podStartE2EDuration="1m58.96990361s" podCreationTimestamp="2026-03-20 06:52:00 +0000 UTC" firstStartedPulling="2026-03-20 06:53:23.394620894 +0000 UTC m=+235.653932045" lastFinishedPulling="2026-03-20 06:53:58.423232294 +0000 UTC m=+270.682543445" observedRunningTime="2026-03-20 06:53:58.965627299 +0000 UTC m=+271.224938450" watchObservedRunningTime="2026-03-20 06:53:58.96990361 +0000 UTC m=+271.229214761"
Mar 20 06:53:59 crc kubenswrapper[5136]: I0320 06:53:59.000266 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" podStartSLOduration=13.000247155 podStartE2EDuration="13.000247155s" podCreationTimestamp="2026-03-20 06:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:53:58.993968179 +0000 UTC m=+271.253279330" watchObservedRunningTime="2026-03-20 06:53:59.000247155 +0000 UTC m=+271.259558306"
Mar 20 06:53:59 crc kubenswrapper[5136]: I0320 06:53:59.281144 5136 csr.go:261] certificate signing request csr-f4bwt is approved, waiting to be issued
Mar 20 06:53:59 crc kubenswrapper[5136]: I0320 06:53:59.287762 5136 csr.go:257] certificate signing request csr-f4bwt is issued
Mar 20 06:53:59 crc kubenswrapper[5136]: I0320 06:53:59.955102 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" event={"ID":"760c854a-7b9d-4582-9bcc-faf077008e0f","Type":"ContainerDied","Data":"bcfaf55d80db554feaeb3774c52e47f9f050ba6262ba6f06d4b21a8da6ad81d5"}
Mar 20 06:53:59 crc kubenswrapper[5136]: I0320 06:53:59.955059 5136 generic.go:334] "Generic (PLEG): container finished" podID="760c854a-7b9d-4582-9bcc-faf077008e0f" containerID="bcfaf55d80db554feaeb3774c52e47f9f050ba6262ba6f06d4b21a8da6ad81d5" exitCode=0
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.131632 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566494-v7mrb"]
Mar 20 06:54:00 crc kubenswrapper[5136]: E0320 06:54:00.131924 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="258f752a-780a-4668-bfd4-6276c1a17472" containerName="pruner"
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.132062 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="258f752a-780a-4668-bfd4-6276c1a17472" containerName="pruner"
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.132257 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="258f752a-780a-4668-bfd4-6276c1a17472" containerName="pruner"
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.135090 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566494-v7mrb"
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.135395 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566494-v7mrb"]
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.139854 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.205263 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f\") pod \"auto-csr-approver-29566494-v7mrb\" (UID: \"793ba114-16f6-4ad2-bc47-daee6a819a00\") " pod="openshift-infra/auto-csr-approver-29566494-v7mrb"
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.289649 5136 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-23 20:07:52.702478936 +0000 UTC
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.289689 5136 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6685h13m52.412793306s for next certificate rotation
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.307211 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f\") pod \"auto-csr-approver-29566494-v7mrb\" (UID: \"793ba114-16f6-4ad2-bc47-daee6a819a00\") " pod="openshift-infra/auto-csr-approver-29566494-v7mrb"
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.337871 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f\") pod \"auto-csr-approver-29566494-v7mrb\" (UID: \"793ba114-16f6-4ad2-bc47-daee6a819a00\") " pod="openshift-infra/auto-csr-approver-29566494-v7mrb"
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.433590 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"]
Mar 20 06:54:00 crc kubenswrapper[5136]: I0320 06:54:00.463205 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566494-v7mrb"
Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.461075 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566492-9gbqz"
Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.626566 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjzr9\" (UniqueName: \"kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9\") pod \"760c854a-7b9d-4582-9bcc-faf077008e0f\" (UID: \"760c854a-7b9d-4582-9bcc-faf077008e0f\") "
Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.634079 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9" (OuterVolumeSpecName: "kube-api-access-pjzr9") pod "760c854a-7b9d-4582-9bcc-faf077008e0f" (UID: "760c854a-7b9d-4582-9bcc-faf077008e0f"). InnerVolumeSpecName "kube-api-access-pjzr9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.729821 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjzr9\" (UniqueName: \"kubernetes.io/projected/760c854a-7b9d-4582-9bcc-faf077008e0f-kube-api-access-pjzr9\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.822787 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566494-v7mrb"]
Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.965286 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" event={"ID":"793ba114-16f6-4ad2-bc47-daee6a819a00","Type":"ContainerStarted","Data":"3ffc4f2f1913877d9ab0804682ad7ba8df5735f227b587b972cc9770d1884537"}
Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.967080 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" event={"ID":"760c854a-7b9d-4582-9bcc-faf077008e0f","Type":"ContainerDied","Data":"e58d8ba4116f562c0c29dade18892c723e48babb4ac158f18e7e7f62c2685db2"}
Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.967147 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e58d8ba4116f562c0c29dade18892c723e48babb4ac158f18e7e7f62c2685db2"
Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.967213 5136 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566492-9gbqz" Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.977889 5136 generic.go:334] "Generic (PLEG): container finished" podID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerID="4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde" exitCode=0 Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.977979 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerDied","Data":"4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde"} Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.984086 5136 generic.go:334] "Generic (PLEG): container finished" podID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerID="dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968" exitCode=0 Mar 20 06:54:01 crc kubenswrapper[5136]: I0320 06:54:01.984110 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerDied","Data":"dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968"} Mar 20 06:54:02 crc kubenswrapper[5136]: I0320 06:54:02.912254 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-rqppz" Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.010077 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerStarted","Data":"f46b7933668706885051cf2336daf80ae5d49b441259e450e3a49c583c6aa84a"} Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.013086 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" 
event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerStarted","Data":"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc"} Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.015374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerStarted","Data":"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5"} Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.016792 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerStarted","Data":"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784"} Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.017989 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" event={"ID":"793ba114-16f6-4ad2-bc47-daee6a819a00","Type":"ContainerStarted","Data":"27efcdb323bdc3f56a43d4c3d542dd5fa1e9865775e2f1ad9ed6c1b623aed3e5"} Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.047041 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h56wl" podStartSLOduration=25.175396614 podStartE2EDuration="35.047023574s" podCreationTimestamp="2026-03-20 06:53:31 +0000 UTC" firstStartedPulling="2026-03-20 06:53:55.87594977 +0000 UTC m=+268.135260921" lastFinishedPulling="2026-03-20 06:54:05.74757673 +0000 UTC m=+278.006887881" observedRunningTime="2026-03-20 06:54:06.043093085 +0000 UTC m=+278.302404256" watchObservedRunningTime="2026-03-20 06:54:06.047023574 +0000 UTC m=+278.306334725" Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.080299 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zvjw4" podStartSLOduration=26.257443486 
podStartE2EDuration="36.080282755s" podCreationTimestamp="2026-03-20 06:53:30 +0000 UTC" firstStartedPulling="2026-03-20 06:53:55.884577843 +0000 UTC m=+268.143888994" lastFinishedPulling="2026-03-20 06:54:05.707417112 +0000 UTC m=+277.966728263" observedRunningTime="2026-03-20 06:54:06.079865122 +0000 UTC m=+278.339176263" watchObservedRunningTime="2026-03-20 06:54:06.080282755 +0000 UTC m=+278.339593906" Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.091301 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" podStartSLOduration=2.1970830120000002 podStartE2EDuration="6.091286817s" podCreationTimestamp="2026-03-20 06:54:00 +0000 UTC" firstStartedPulling="2026-03-20 06:54:01.836257622 +0000 UTC m=+274.095568774" lastFinishedPulling="2026-03-20 06:54:05.730461428 +0000 UTC m=+277.989772579" observedRunningTime="2026-03-20 06:54:06.08984211 +0000 UTC m=+278.349153261" watchObservedRunningTime="2026-03-20 06:54:06.091286817 +0000 UTC m=+278.350597968" Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.718043 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"] Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.718634 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerName="controller-manager" containerID="cri-o://34c101d578be5c8e4dd690b32bec0f18aff05b8af94343c38b5c583125107f43" gracePeriod=30 Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.808045 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"] Mar 20 06:54:06 crc kubenswrapper[5136]: I0320 06:54:06.808294 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" podUID="433e77aa-fe22-43d7-87ed-0a9219b61762" containerName="route-controller-manager" containerID="cri-o://0fb2ad703d57143cc353d8d80b57c7e9e8375a9fe5053d2f01f256b52277bbbb" gracePeriod=30 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.025179 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerID="aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.025243 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerDied","Data":"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.027429 5136 generic.go:334] "Generic (PLEG): container finished" podID="793ba114-16f6-4ad2-bc47-daee6a819a00" containerID="27efcdb323bdc3f56a43d4c3d542dd5fa1e9865775e2f1ad9ed6c1b623aed3e5" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.027483 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" event={"ID":"793ba114-16f6-4ad2-bc47-daee6a819a00","Type":"ContainerDied","Data":"27efcdb323bdc3f56a43d4c3d542dd5fa1e9865775e2f1ad9ed6c1b623aed3e5"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.029066 5136 generic.go:334] "Generic (PLEG): container finished" podID="433e77aa-fe22-43d7-87ed-0a9219b61762" containerID="0fb2ad703d57143cc353d8d80b57c7e9e8375a9fe5053d2f01f256b52277bbbb" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.029132 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" 
event={"ID":"433e77aa-fe22-43d7-87ed-0a9219b61762","Type":"ContainerDied","Data":"0fb2ad703d57143cc353d8d80b57c7e9e8375a9fe5053d2f01f256b52277bbbb"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.030696 5136 generic.go:334] "Generic (PLEG): container finished" podID="c390cc35-103e-4376-a377-789d27e92301" containerID="f46b7933668706885051cf2336daf80ae5d49b441259e450e3a49c583c6aa84a" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.030736 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerDied","Data":"f46b7933668706885051cf2336daf80ae5d49b441259e450e3a49c583c6aa84a"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.034013 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerID="0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.034082 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerDied","Data":"0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.036437 5136 generic.go:334] "Generic (PLEG): container finished" podID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerID="34c101d578be5c8e4dd690b32bec0f18aff05b8af94343c38b5c583125107f43" exitCode=0 Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.036500 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" event={"ID":"ba8ace97-a564-4c66-a39d-31d3d1192731","Type":"ContainerDied","Data":"34c101d578be5c8e4dd690b32bec0f18aff05b8af94343c38b5c583125107f43"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.540215 5136 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 20 06:54:09 crc kubenswrapper[5136]: E0320 06:54:07.540604 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" containerName="oc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.540615 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" containerName="oc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.540730 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" containerName="oc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.541078 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.543642 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.543840 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.551545 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.710084 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.710121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.811151 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.811191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.811356 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.834362 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.857772 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.915349 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.946444 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"] Mar 20 06:54:09 crc kubenswrapper[5136]: E0320 06:54:07.947049 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerName="controller-manager" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.947087 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerName="controller-manager" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.947217 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerName="controller-manager" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.947653 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:07.956000 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"] Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.013191 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkfns\" (UniqueName: \"kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns\") pod \"ba8ace97-a564-4c66-a39d-31d3d1192731\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.013244 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert\") pod \"ba8ace97-a564-4c66-a39d-31d3d1192731\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.013330 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca\") pod \"ba8ace97-a564-4c66-a39d-31d3d1192731\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.013348 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config\") pod \"ba8ace97-a564-4c66-a39d-31d3d1192731\" (UID: \"ba8ace97-a564-4c66-a39d-31d3d1192731\") " Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.013378 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles\") pod \"ba8ace97-a564-4c66-a39d-31d3d1192731\" (UID: 
\"ba8ace97-a564-4c66-a39d-31d3d1192731\") " Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014163 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ba8ace97-a564-4c66-a39d-31d3d1192731" (UID: "ba8ace97-a564-4c66-a39d-31d3d1192731"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014219 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca" (OuterVolumeSpecName: "client-ca") pod "ba8ace97-a564-4c66-a39d-31d3d1192731" (UID: "ba8ace97-a564-4c66-a39d-31d3d1192731"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014277 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config" (OuterVolumeSpecName: "config") pod "ba8ace97-a564-4c66-a39d-31d3d1192731" (UID: "ba8ace97-a564-4c66-a39d-31d3d1192731"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014552 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014602 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.014616 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ba8ace97-a564-4c66-a39d-31d3d1192731-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.016954 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ba8ace97-a564-4c66-a39d-31d3d1192731" (UID: "ba8ace97-a564-4c66-a39d-31d3d1192731"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.017619 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns" (OuterVolumeSpecName: "kube-api-access-kkfns") pod "ba8ace97-a564-4c66-a39d-31d3d1192731" (UID: "ba8ace97-a564-4c66-a39d-31d3d1192731"). InnerVolumeSpecName "kube-api-access-kkfns". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.043642 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.044164 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" event={"ID":"ba8ace97-a564-4c66-a39d-31d3d1192731","Type":"ContainerDied","Data":"6e7cd78c8651d266a5f2ac19a028eab78f1472c39aa6a54381d2ce52eedcf623"} Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.046052 5136 scope.go:117] "RemoveContainer" containerID="34c101d578be5c8e4dd690b32bec0f18aff05b8af94343c38b5c583125107f43" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.097805 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"] Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.100956 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5fcfc77697-j6jd6"] Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116185 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116230 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116314 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116602 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116647 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqzq5\" (UniqueName: \"kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116787 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkfns\" (UniqueName: \"kubernetes.io/projected/ba8ace97-a564-4c66-a39d-31d3d1192731-kube-api-access-kkfns\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.116803 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba8ace97-a564-4c66-a39d-31d3d1192731-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.218402 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.218442 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqzq5\" (UniqueName: \"kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.218495 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.218514 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.218580 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.219639 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"
Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.219757 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"
Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.219799 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"
Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.223603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"
Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.235549 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqzq5\" (UniqueName: \"kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5\") pod \"controller-manager-5bcf5ffddd-j6v5f\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"
Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.260313 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"
Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.405877 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" path="/var/lib/kubelet/pods/ba8ace97-a564-4c66-a39d-31d3d1192731/volumes"
Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.844968 5136 patch_prober.go:28] interesting pod/controller-manager-5fcfc77697-j6jd6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 20 06:54:09 crc kubenswrapper[5136]: I0320 06:54:08.845261 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5fcfc77697-j6jd6" podUID="ba8ace97-a564-4c66-a39d-31d3d1192731" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.055124 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566494-v7mrb" event={"ID":"793ba114-16f6-4ad2-bc47-daee6a819a00","Type":"ContainerDied","Data":"3ffc4f2f1913877d9ab0804682ad7ba8df5735f227b587b972cc9770d1884537"}
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.055650 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ffc4f2f1913877d9ab0804682ad7ba8df5735f227b587b972cc9770d1884537"
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.057480 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerStarted","Data":"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8"}
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.059237 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566494-v7mrb"
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.059564 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747" event={"ID":"433e77aa-fe22-43d7-87ed-0a9219b61762","Type":"ContainerDied","Data":"dd09161821b366f7bf1ba043da9d6862a2982103d6804207a3cd3950019ad2ae"}
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.059593 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd09161821b366f7bf1ba043da9d6862a2982103d6804207a3cd3950019ad2ae"
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.079921 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w76x4" podStartSLOduration=25.844895019 podStartE2EDuration="39.07990667s" podCreationTimestamp="2026-03-20 06:53:31 +0000 UTC" firstStartedPulling="2026-03-20 06:53:55.799104838 +0000 UTC m=+268.058415989" lastFinishedPulling="2026-03-20 06:54:09.034116489 +0000 UTC m=+281.293427640" observedRunningTime="2026-03-20 06:54:10.077004924 +0000 UTC m=+282.336316095" watchObservedRunningTime="2026-03-20 06:54:10.07990667 +0000 UTC m=+282.339217821"
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.083108 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.194150 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"]
Mar 20 06:54:10 crc kubenswrapper[5136]: W0320 06:54:10.205799 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcada42b5_7a5d_47d5_84e7_6c5612db1132.slice/crio-3d3b4204912003a659f2d723f3606e1a424a5ae09bdc6ee3049fa1cfb5307c75 WatchSource:0}: Error finding container 3d3b4204912003a659f2d723f3606e1a424a5ae09bdc6ee3049fa1cfb5307c75: Status 404 returned error can't find the container with id 3d3b4204912003a659f2d723f3606e1a424a5ae09bdc6ee3049fa1cfb5307c75
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.209020 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Mar 20 06:54:10 crc kubenswrapper[5136]: W0320 06:54:10.216245 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod49b77392_c7b9_4b0b_9320_6e4fcce120d1.slice/crio-a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a WatchSource:0}: Error finding container a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a: Status 404 returned error can't find the container with id a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.252410 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca\") pod \"433e77aa-fe22-43d7-87ed-0a9219b61762\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") "
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.252457 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config\") pod \"433e77aa-fe22-43d7-87ed-0a9219b61762\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") "
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.252512 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert\") pod \"433e77aa-fe22-43d7-87ed-0a9219b61762\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") "
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.252537 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sf5v\" (UniqueName: \"kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v\") pod \"433e77aa-fe22-43d7-87ed-0a9219b61762\" (UID: \"433e77aa-fe22-43d7-87ed-0a9219b61762\") "
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.252584 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f\") pod \"793ba114-16f6-4ad2-bc47-daee6a819a00\" (UID: \"793ba114-16f6-4ad2-bc47-daee6a819a00\") "
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.253857 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config" (OuterVolumeSpecName: "config") pod "433e77aa-fe22-43d7-87ed-0a9219b61762" (UID: "433e77aa-fe22-43d7-87ed-0a9219b61762"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.254122 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca" (OuterVolumeSpecName: "client-ca") pod "433e77aa-fe22-43d7-87ed-0a9219b61762" (UID: "433e77aa-fe22-43d7-87ed-0a9219b61762"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.258250 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v" (OuterVolumeSpecName: "kube-api-access-2sf5v") pod "433e77aa-fe22-43d7-87ed-0a9219b61762" (UID: "433e77aa-fe22-43d7-87ed-0a9219b61762"). InnerVolumeSpecName "kube-api-access-2sf5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.258522 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f" (OuterVolumeSpecName: "kube-api-access-c9d9f") pod "793ba114-16f6-4ad2-bc47-daee6a819a00" (UID: "793ba114-16f6-4ad2-bc47-daee6a819a00"). InnerVolumeSpecName "kube-api-access-c9d9f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.259374 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "433e77aa-fe22-43d7-87ed-0a9219b61762" (UID: "433e77aa-fe22-43d7-87ed-0a9219b61762"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.354013 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433e77aa-fe22-43d7-87ed-0a9219b61762-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.354267 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sf5v\" (UniqueName: \"kubernetes.io/projected/433e77aa-fe22-43d7-87ed-0a9219b61762-kube-api-access-2sf5v\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.354421 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c9d9f\" (UniqueName: \"kubernetes.io/projected/793ba114-16f6-4ad2-bc47-daee6a819a00-kube-api-access-c9d9f\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.354550 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-client-ca\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:10 crc kubenswrapper[5136]: I0320 06:54:10.354670 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433e77aa-fe22-43d7-87ed-0a9219b61762-config\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.038392 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zvjw4"
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.038455 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zvjw4"
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.064962 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" event={"ID":"cada42b5-7a5d-47d5-84e7-6c5612db1132","Type":"ContainerStarted","Data":"3d3b4204912003a659f2d723f3606e1a424a5ae09bdc6ee3049fa1cfb5307c75"}
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.067295 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerStarted","Data":"2d4dba2ff1549dc6f8aaa33906c467cbc9d1690e7ac80aacdc997a2f55cc4025"}
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.068290 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566494-v7mrb"
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.068333 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.068280 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"49b77392-c7b9-4b0b-9320-6e4fcce120d1","Type":"ContainerStarted","Data":"a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a"}
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.089203 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ccgmd" podStartSLOduration=25.138706290000002 podStartE2EDuration="39.089187864s" podCreationTimestamp="2026-03-20 06:53:32 +0000 UTC" firstStartedPulling="2026-03-20 06:53:55.808213397 +0000 UTC m=+268.067524558" lastFinishedPulling="2026-03-20 06:54:09.758694991 +0000 UTC m=+282.018006132" observedRunningTime="2026-03-20 06:54:11.086604819 +0000 UTC m=+283.345915970" watchObservedRunningTime="2026-03-20 06:54:11.089187864 +0000 UTC m=+283.348499015"
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.101495 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"]
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.107877 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6697778f8b-jj747"]
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.332885 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zvjw4"
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.400272 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zvjw4"
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.441461 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h56wl"
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.441503 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h56wl"
Mar 20 06:54:11 crc kubenswrapper[5136]: I0320 06:54:11.496570 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h56wl"
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.056430 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w76x4"
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.056698 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w76x4"
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.074532 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"49b77392-c7b9-4b0b-9320-6e4fcce120d1","Type":"ContainerStarted","Data":"f9289243a285961376cc9a2e6f5a73de90f457c5a2f13c754d6c53831bd02f8b"}
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.076454 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerStarted","Data":"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c"}
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.077582 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" event={"ID":"cada42b5-7a5d-47d5-84e7-6c5612db1132","Type":"ContainerStarted","Data":"489bdb392a45bad16e501fa0a0fb2098e4fc47999ee7306501643766f25cd617"}
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.078243 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.079561 5136 patch_prober.go:28] interesting pod/controller-manager-5bcf5ffddd-j6v5f container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body=
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.079605 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused"
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.094899 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=5.09487848 podStartE2EDuration="5.09487848s" podCreationTimestamp="2026-03-20 06:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:12.09092698 +0000 UTC m=+284.350238141" watchObservedRunningTime="2026-03-20 06:54:12.09487848 +0000 UTC m=+284.354189631"
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.124332 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h56wl"
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.133166 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" podStartSLOduration=6.133153715 podStartE2EDuration="6.133153715s" podCreationTimestamp="2026-03-20 06:54:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:12.132429631 +0000 UTC m=+284.391740792" watchObservedRunningTime="2026-03-20 06:54:12.133153715 +0000 UTC m=+284.392464866"
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.135188 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tk985" podStartSLOduration=2.098464156 podStartE2EDuration="43.135182502s" podCreationTimestamp="2026-03-20 06:53:29 +0000 UTC" firstStartedPulling="2026-03-20 06:53:30.62037298 +0000 UTC m=+242.879684131" lastFinishedPulling="2026-03-20 06:54:11.657091326 +0000 UTC m=+283.916402477" observedRunningTime="2026-03-20 06:54:12.115376452 +0000 UTC m=+284.374687603" watchObservedRunningTime="2026-03-20 06:54:12.135182502 +0000 UTC m=+284.394493653"
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.403278 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="433e77aa-fe22-43d7-87ed-0a9219b61762" path="/var/lib/kubelet/pods/433e77aa-fe22-43d7-87ed-0a9219b61762/volumes"
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.478463 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ccgmd"
Mar 20 06:54:12 crc kubenswrapper[5136]: I0320 06:54:12.478622 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ccgmd"
Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.083578 5136 generic.go:334] "Generic (PLEG): container finished" podID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerID="61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c" exitCode=0
Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.083628 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerDied","Data":"61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c"}
Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.089056 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerDied","Data":"6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91"}
Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.089151 5136 generic.go:334] "Generic (PLEG): container finished" podID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerID="6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91" exitCode=0
Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.092953 5136 generic.go:334] "Generic (PLEG): container finished" podID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerID="53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71" exitCode=0
Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.092989 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerDied","Data":"53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71"}
Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.094450 5136 generic.go:334] "Generic (PLEG): container finished" podID="49b77392-c7b9-4b0b-9320-6e4fcce120d1" containerID="f9289243a285961376cc9a2e6f5a73de90f457c5a2f13c754d6c53831bd02f8b" exitCode=0
Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.094686 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"49b77392-c7b9-4b0b-9320-6e4fcce120d1","Type":"ContainerDied","Data":"f9289243a285961376cc9a2e6f5a73de90f457c5a2f13c754d6c53831bd02f8b"}
Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.098664 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w76x4" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="registry-server" probeResult="failure" output=<
Mar 20 06:54:13 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s
Mar 20 06:54:13 crc kubenswrapper[5136]: >
Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.100210 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"
Mar 20 06:54:13 crc kubenswrapper[5136]: I0320 06:54:13.521506 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-ccgmd" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="registry-server" probeResult="failure" output=<
Mar 20 06:54:13 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s
Mar 20 06:54:13 crc kubenswrapper[5136]: >
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.102076 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerStarted","Data":"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780"}
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.104515 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerStarted","Data":"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1"}
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.107394 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerStarted","Data":"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae"}
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.119593 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gnspw" podStartSLOduration=3.126380684 podStartE2EDuration="46.11958471s" podCreationTimestamp="2026-03-20 06:53:28 +0000 UTC" firstStartedPulling="2026-03-20 06:53:30.5870744 +0000 UTC m=+242.846385551" lastFinishedPulling="2026-03-20 06:54:13.580278426 +0000 UTC m=+285.839589577" observedRunningTime="2026-03-20 06:54:14.119050054 +0000 UTC m=+286.378361205" watchObservedRunningTime="2026-03-20 06:54:14.11958471 +0000 UTC m=+286.378895861"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.169302 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hjck6" podStartSLOduration=3.2653753070000002 podStartE2EDuration="46.169289312s" podCreationTimestamp="2026-03-20 06:53:28 +0000 UTC" firstStartedPulling="2026-03-20 06:53:30.593663138 +0000 UTC m=+242.852974289" lastFinishedPulling="2026-03-20 06:54:13.497577143 +0000 UTC m=+285.756888294" observedRunningTime="2026-03-20 06:54:14.167574505 +0000 UTC m=+286.426885656" watchObservedRunningTime="2026-03-20 06:54:14.169289312 +0000 UTC m=+286.428600463"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.171099 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5cc6n" podStartSLOduration=3.2146032780000002 podStartE2EDuration="46.171092301s" podCreationTimestamp="2026-03-20 06:53:28 +0000 UTC" firstStartedPulling="2026-03-20 06:53:30.612281845 +0000 UTC m=+242.871592996" lastFinishedPulling="2026-03-20 06:54:13.568770868 +0000 UTC m=+285.828082019" observedRunningTime="2026-03-20 06:54:14.148920303 +0000 UTC m=+286.408231444" watchObservedRunningTime="2026-03-20 06:54:14.171092301 +0000 UTC m=+286.430403452"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.414026 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532026 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"]
Mar 20 06:54:14 crc kubenswrapper[5136]: E0320 06:54:14.532223 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="793ba114-16f6-4ad2-bc47-daee6a819a00" containerName="oc"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532235 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="793ba114-16f6-4ad2-bc47-daee6a819a00" containerName="oc"
Mar 20 06:54:14 crc kubenswrapper[5136]: E0320 06:54:14.532246 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b77392-c7b9-4b0b-9320-6e4fcce120d1" containerName="pruner"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532253 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b77392-c7b9-4b0b-9320-6e4fcce120d1" containerName="pruner"
Mar 20 06:54:14 crc kubenswrapper[5136]: E0320 06:54:14.532264 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433e77aa-fe22-43d7-87ed-0a9219b61762" containerName="route-controller-manager"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532272 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="433e77aa-fe22-43d7-87ed-0a9219b61762" containerName="route-controller-manager"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532367 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="433e77aa-fe22-43d7-87ed-0a9219b61762" containerName="route-controller-manager"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532378 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="793ba114-16f6-4ad2-bc47-daee6a819a00" containerName="oc"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532385 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b77392-c7b9-4b0b-9320-6e4fcce120d1" containerName="pruner"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.532726 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.535010 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.535218 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.535334 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.536148 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.536275 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.536296 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.548658 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"]
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.608484 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access\") pod \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") "
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.608543 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir\") pod \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\" (UID: \"49b77392-c7b9-4b0b-9320-6e4fcce120d1\") "
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.608726 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "49b77392-c7b9-4b0b-9320-6e4fcce120d1" (UID: "49b77392-c7b9-4b0b-9320-6e4fcce120d1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.608922 5136 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kubelet-dir\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.615891 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "49b77392-c7b9-4b0b-9320-6e4fcce120d1" (UID: "49b77392-c7b9-4b0b-9320-6e4fcce120d1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.709529 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.709606 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.709668 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-759tv\" (UniqueName: \"kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.709744 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.709784 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/49b77392-c7b9-4b0b-9320-6e4fcce120d1-kube-api-access\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.810655 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-759tv\" (UniqueName: \"kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.810745 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.810785 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.810808 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.811721 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.812025 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.821727 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.829098 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-759tv\" (UniqueName: \"kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv\") pod \"route-controller-manager-7d44b75469-h75kl\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:14 crc kubenswrapper[5136]: I0320 06:54:14.843691 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"
Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.118682 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"49b77392-c7b9-4b0b-9320-6e4fcce120d1","Type":"ContainerDied","Data":"a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a"}
Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.118925 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4cf8bd8dbafa2ac753c600b98a02c2b1d9b112327de74ddfed7d07642b73e0a"
Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.118792 5136 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.179105 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"] Mar 20 06:54:15 crc kubenswrapper[5136]: W0320 06:54:15.189901 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a3b5dcf_bcd1_4502_88f8_50c39af7e940.slice/crio-6a315e78b7f2d9cc8f4d33cb5e467ab336b28cd1fd4260458044c4145ecd3b51 WatchSource:0}: Error finding container 6a315e78b7f2d9cc8f4d33cb5e467ab336b28cd1fd4260458044c4145ecd3b51: Status 404 returned error can't find the container with id 6a315e78b7f2d9cc8f4d33cb5e467ab336b28cd1fd4260458044c4145ecd3b51 Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.821787 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.821898 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.825848 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:54:15 crc kubenswrapper[5136]: I0320 06:54:15.826093 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h56wl" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" 
containerName="registry-server" containerID="cri-o://0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc" gracePeriod=2 Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.125391 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" event={"ID":"6a3b5dcf-bcd1-4502-88f8-50c39af7e940","Type":"ContainerStarted","Data":"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f"} Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.125466 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" event={"ID":"6a3b5dcf-bcd1-4502-88f8-50c39af7e940","Type":"ContainerStarted","Data":"6a315e78b7f2d9cc8f4d33cb5e467ab336b28cd1fd4260458044c4145ecd3b51"} Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.537013 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.542590 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.544339 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.545612 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.561744 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.737620 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.737670 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.737724 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.838847 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.838946 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.838963 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.838979 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.839049 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.857642 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access\") pod \"installer-9-crc\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " 
pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.901657 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:16 crc kubenswrapper[5136]: I0320 06:54:16.920373 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.041404 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content\") pod \"1cf24582-9ee1-4a25-9293-b116d55e6465\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.041793 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities\") pod \"1cf24582-9ee1-4a25-9293-b116d55e6465\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.041873 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xph6j\" (UniqueName: \"kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j\") pod \"1cf24582-9ee1-4a25-9293-b116d55e6465\" (UID: \"1cf24582-9ee1-4a25-9293-b116d55e6465\") " Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.043407 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities" (OuterVolumeSpecName: "utilities") pod "1cf24582-9ee1-4a25-9293-b116d55e6465" (UID: "1cf24582-9ee1-4a25-9293-b116d55e6465"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.046227 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j" (OuterVolumeSpecName: "kube-api-access-xph6j") pod "1cf24582-9ee1-4a25-9293-b116d55e6465" (UID: "1cf24582-9ee1-4a25-9293-b116d55e6465"). InnerVolumeSpecName "kube-api-access-xph6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.085708 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cf24582-9ee1-4a25-9293-b116d55e6465" (UID: "1cf24582-9ee1-4a25-9293-b116d55e6465"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134748 5136 generic.go:334] "Generic (PLEG): container finished" podID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerID="0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc" exitCode=0 Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134849 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerDied","Data":"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc"} Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134917 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h56wl" event={"ID":"1cf24582-9ee1-4a25-9293-b116d55e6465","Type":"ContainerDied","Data":"e97be04ec45ad4ff430feb5809767da522725df9a1e53411e7387d687914904f"} Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134936 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134954 5136 scope.go:117] "RemoveContainer" containerID="0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.134880 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h56wl" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.143762 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.143790 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xph6j\" (UniqueName: \"kubernetes.io/projected/1cf24582-9ee1-4a25-9293-b116d55e6465-kube-api-access-xph6j\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.143802 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cf24582-9ee1-4a25-9293-b116d55e6465-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.146691 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.150514 5136 scope.go:117] "RemoveContainer" containerID="4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.157543 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" podStartSLOduration=11.157525504 podStartE2EDuration="11.157525504s" podCreationTimestamp="2026-03-20 06:54:06 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:17.152377235 +0000 UTC m=+289.411688396" watchObservedRunningTime="2026-03-20 06:54:17.157525504 +0000 UTC m=+289.416836655" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.183830 5136 scope.go:117] "RemoveContainer" containerID="c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.197342 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.197392 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h56wl"] Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.203939 5136 scope.go:117] "RemoveContainer" containerID="0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc" Mar 20 06:54:17 crc kubenswrapper[5136]: E0320 06:54:17.204483 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc\": container with ID starting with 0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc not found: ID does not exist" containerID="0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.204549 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc"} err="failed to get container status \"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc\": rpc error: code = NotFound desc = could not find container \"0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc\": container with ID starting with 
0b9ca4c6a87080715f33e8b6d643b96e63f200c35c3176f72c095f9b86b6f8cc not found: ID does not exist" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.204598 5136 scope.go:117] "RemoveContainer" containerID="4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde" Mar 20 06:54:17 crc kubenswrapper[5136]: E0320 06:54:17.205011 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde\": container with ID starting with 4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde not found: ID does not exist" containerID="4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.205040 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde"} err="failed to get container status \"4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde\": rpc error: code = NotFound desc = could not find container \"4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde\": container with ID starting with 4f78c42a7893dc826244650e8886950ba1351aaa09f97ac65c5ac0fadfd1ccde not found: ID does not exist" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.205061 5136 scope.go:117] "RemoveContainer" containerID="c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50" Mar 20 06:54:17 crc kubenswrapper[5136]: E0320 06:54:17.205459 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50\": container with ID starting with c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50 not found: ID does not exist" containerID="c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50" Mar 20 06:54:17 crc 
kubenswrapper[5136]: I0320 06:54:17.205507 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50"} err="failed to get container status \"c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50\": rpc error: code = NotFound desc = could not find container \"c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50\": container with ID starting with c1dbd216168f7d6e0c9e9a66d4d5c116d77cc9934078fad5ae9908f479d91b50 not found: ID does not exist" Mar 20 06:54:17 crc kubenswrapper[5136]: I0320 06:54:17.303722 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Mar 20 06:54:17 crc kubenswrapper[5136]: W0320 06:54:17.310505 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod84671130_5991_4032_964a_01c61fefc56a.slice/crio-a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0 WatchSource:0}: Error finding container a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0: Status 404 returned error can't find the container with id a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0 Mar 20 06:54:18 crc kubenswrapper[5136]: I0320 06:54:18.142433 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84671130-5991-4032-964a-01c61fefc56a","Type":"ContainerStarted","Data":"41fd895caa40c1a2b0a6165ebaa7bf7c883b138febb47e401574b2b95cc9077c"} Mar 20 06:54:18 crc kubenswrapper[5136]: I0320 06:54:18.142835 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84671130-5991-4032-964a-01c61fefc56a","Type":"ContainerStarted","Data":"a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0"} Mar 20 06:54:18 crc kubenswrapper[5136]: I0320 06:54:18.410441 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" path="/var/lib/kubelet/pods/1cf24582-9ee1-4a25-9293-b116d55e6465/volumes" Mar 20 06:54:18 crc kubenswrapper[5136]: I0320 06:54:18.959434 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:54:18 crc kubenswrapper[5136]: I0320 06:54:18.959498 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.025530 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.052452 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.052539 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.101663 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.173236 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.173209477 podStartE2EDuration="3.173209477s" podCreationTimestamp="2026-03-20 06:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:19.16538419 +0000 UTC m=+291.424695351" watchObservedRunningTime="2026-03-20 06:54:19.173209477 +0000 UTC m=+291.432520648" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.202804 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.213707 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.320617 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.321024 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.356940 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.466293 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.466361 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:54:19 crc kubenswrapper[5136]: I0320 06:54:19.508602 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:54:20 crc kubenswrapper[5136]: I0320 06:54:20.201570 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:54:20 crc kubenswrapper[5136]: I0320 06:54:20.204993 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.127891 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:54:22 crc kubenswrapper[5136]: 
I0320 06:54:22.183936 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.229765 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tk985"] Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.230063 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tk985" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="registry-server" containerID="cri-o://633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c" gracePeriod=2 Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.428406 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"] Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.429030 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5cc6n" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="registry-server" containerID="cri-o://96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae" gracePeriod=2 Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.532866 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.587470 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ccgmd" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.734809 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tk985" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.825647 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities\") pod \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.825722 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpmf4\" (UniqueName: \"kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4\") pod \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.825734 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cc6n" Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.825806 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content\") pod \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\" (UID: \"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e\") " Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.826628 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities" (OuterVolumeSpecName: "utilities") pod "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" (UID: "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.830980 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4" (OuterVolumeSpecName: "kube-api-access-qpmf4") pod "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" (UID: "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e"). InnerVolumeSpecName "kube-api-access-qpmf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.878076 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" (UID: "0ecf0c0d-35e3-402c-ac3a-60bb2686de5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927487 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content\") pod \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") "
Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927610 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24rh4\" (UniqueName: \"kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4\") pod \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") "
Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927650 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities\") pod \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\" (UID: \"b5f9659e-73fb-4389-8d6e-b739dfa94d4b\") "
Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927915 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927934 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.927944 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpmf4\" (UniqueName: \"kubernetes.io/projected/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e-kube-api-access-qpmf4\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.929407 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities" (OuterVolumeSpecName: "utilities") pod "b5f9659e-73fb-4389-8d6e-b739dfa94d4b" (UID: "b5f9659e-73fb-4389-8d6e-b739dfa94d4b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.930417 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4" (OuterVolumeSpecName: "kube-api-access-24rh4") pod "b5f9659e-73fb-4389-8d6e-b739dfa94d4b" (UID: "b5f9659e-73fb-4389-8d6e-b739dfa94d4b"). InnerVolumeSpecName "kube-api-access-24rh4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:54:22 crc kubenswrapper[5136]: I0320 06:54:22.978038 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b5f9659e-73fb-4389-8d6e-b739dfa94d4b" (UID: "b5f9659e-73fb-4389-8d6e-b739dfa94d4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.028982 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.029191 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.029261 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24rh4\" (UniqueName: \"kubernetes.io/projected/b5f9659e-73fb-4389-8d6e-b739dfa94d4b-kube-api-access-24rh4\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.179259 5136 generic.go:334] "Generic (PLEG): container finished" podID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerID="96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae" exitCode=0
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.179294 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cc6n"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.179325 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerDied","Data":"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae"}
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.180868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cc6n" event={"ID":"b5f9659e-73fb-4389-8d6e-b739dfa94d4b","Type":"ContainerDied","Data":"e01d0fc1f5e7b135553c58dae30642386c72820cdb0d76abc481f734a7517ae7"}
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.180896 5136 scope.go:117] "RemoveContainer" containerID="96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.186600 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerID="633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c" exitCode=0
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.186645 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tk985"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.186727 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerDied","Data":"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c"}
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.186759 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tk985" event={"ID":"0ecf0c0d-35e3-402c-ac3a-60bb2686de5e","Type":"ContainerDied","Data":"0fb7591931908d54cba79b40ec6231538415c3ba45eb54605b5ec4b5dd387ac9"}
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.207964 5136 scope.go:117] "RemoveContainer" containerID="6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.234873 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"]
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.245017 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5cc6n"]
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.246209 5136 scope.go:117] "RemoveContainer" containerID="cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.249133 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tk985"]
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.253015 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tk985"]
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.262738 5136 scope.go:117] "RemoveContainer" containerID="96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae"
Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 06:54:23.263015 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae\": container with ID starting with 96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae not found: ID does not exist" containerID="96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263043 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae"} err="failed to get container status \"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae\": rpc error: code = NotFound desc = could not find container \"96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae\": container with ID starting with 96cab511116a50bfd5e8d065be228204ad81aec9753a6f27381de196294009ae not found: ID does not exist"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263061 5136 scope.go:117] "RemoveContainer" containerID="6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91"
Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 06:54:23.263402 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91\": container with ID starting with 6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91 not found: ID does not exist" containerID="6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263435 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91"} err="failed to get container status \"6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91\": rpc error: code = NotFound desc = could not find container \"6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91\": container with ID starting with 6781c7e9d7eb5e0eb62174fa991e0172bb9a392f4a1873a270da9a5f802faa91 not found: ID does not exist"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263452 5136 scope.go:117] "RemoveContainer" containerID="cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df"
Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 06:54:23.263681 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df\": container with ID starting with cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df not found: ID does not exist" containerID="cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263704 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df"} err="failed to get container status \"cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df\": rpc error: code = NotFound desc = could not find container \"cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df\": container with ID starting with cc9956c9a552b139b5e5774154b7b9855f883623477aaf77275e4bbc83c939df not found: ID does not exist"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.263720 5136 scope.go:117] "RemoveContainer" containerID="633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.277675 5136 scope.go:117] "RemoveContainer" containerID="0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.296460 5136 scope.go:117] "RemoveContainer" containerID="9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.312223 5136 scope.go:117] "RemoveContainer" containerID="633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c"
Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 06:54:23.312613 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c\": container with ID starting with 633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c not found: ID does not exist" containerID="633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.312658 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c"} err="failed to get container status \"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c\": rpc error: code = NotFound desc = could not find container \"633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c\": container with ID starting with 633b7423fd9f273ea3d2aedb0b1fdd8feecc5709213ba31185f081ae3bd70b0c not found: ID does not exist"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.312685 5136 scope.go:117] "RemoveContainer" containerID="0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c"
Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 06:54:23.313103 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c\": container with ID starting with 0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c not found: ID does not exist" containerID="0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.313149 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c"} err="failed to get container status \"0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c\": rpc error: code = NotFound desc = could not find container \"0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c\": container with ID starting with 0a3a9f250b57d97a9f680ce6f5e879f1dd2181314e08b745f63267fbb724073c not found: ID does not exist"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.313177 5136 scope.go:117] "RemoveContainer" containerID="9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3"
Mar 20 06:54:23 crc kubenswrapper[5136]: E0320 06:54:23.313471 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3\": container with ID starting with 9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3 not found: ID does not exist" containerID="9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3"
Mar 20 06:54:23 crc kubenswrapper[5136]: I0320 06:54:23.313496 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3"} err="failed to get container status \"9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3\": rpc error: code = NotFound desc = could not find container \"9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3\": container with ID starting with 9ca3fbec27544afa777bf540af670781f133c1d65deee4b039aa27a845d2bac3 not found: ID does not exist"
Mar 20 06:54:24 crc kubenswrapper[5136]: I0320 06:54:24.417375 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" path="/var/lib/kubelet/pods/0ecf0c0d-35e3-402c-ac3a-60bb2686de5e/volumes"
Mar 20 06:54:24 crc kubenswrapper[5136]: I0320 06:54:24.419733 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" path="/var/lib/kubelet/pods/b5f9659e-73fb-4389-8d6e-b739dfa94d4b/volumes"
Mar 20 06:54:24 crc kubenswrapper[5136]: I0320 06:54:24.832965 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"]
Mar 20 06:54:24 crc kubenswrapper[5136]: I0320 06:54:24.833308 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ccgmd" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="registry-server" containerID="cri-o://2d4dba2ff1549dc6f8aaa33906c467cbc9d1690e7ac80aacdc997a2f55cc4025" gracePeriod=2
Mar 20 06:54:25 crc kubenswrapper[5136]: I0320 06:54:25.207956 5136 generic.go:334] "Generic (PLEG): container finished" podID="c390cc35-103e-4376-a377-789d27e92301" containerID="2d4dba2ff1549dc6f8aaa33906c467cbc9d1690e7ac80aacdc997a2f55cc4025" exitCode=0
Mar 20 06:54:25 crc kubenswrapper[5136]: I0320 06:54:25.208025 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerDied","Data":"2d4dba2ff1549dc6f8aaa33906c467cbc9d1690e7ac80aacdc997a2f55cc4025"}
Mar 20 06:54:25 crc kubenswrapper[5136]: I0320 06:54:25.461169 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerName="oauth-openshift" containerID="cri-o://123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857" gracePeriod=15
Mar 20 06:54:25 crc kubenswrapper[5136]: I0320 06:54:25.968480 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgmd"
Mar 20 06:54:25 crc kubenswrapper[5136]: I0320 06:54:25.972424 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p"
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.074581 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.074639 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.074676 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities\") pod \"c390cc35-103e-4376-a377-789d27e92301\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.074693 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwfcd\" (UniqueName: \"kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.075739 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities" (OuterVolumeSpecName: "utilities") pod "c390cc35-103e-4376-a377-789d27e92301" (UID: "c390cc35-103e-4376-a377-789d27e92301"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.075793 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.075882 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.075934 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.075958 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076329 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076369 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076381 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076405 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content\") pod \"c390cc35-103e-4376-a377-789d27e92301\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076460 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.076498 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077025 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077099 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077380 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077442 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077473 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig\") pod \"1a566282-9a27-4172-b5ba-574e0179cfc4\" (UID: \"1a566282-9a27-4172-b5ba-574e0179cfc4\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.077519 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8zcs\" (UniqueName: \"kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs\") pod \"c390cc35-103e-4376-a377-789d27e92301\" (UID: \"c390cc35-103e-4376-a377-789d27e92301\") "
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078336 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078571 5136 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-policies\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078587 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078597 5136 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/1a566282-9a27-4172-b5ba-574e0179cfc4-audit-dir\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078606 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078617 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.078874 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.081601 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.082064 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs" (OuterVolumeSpecName: "kube-api-access-x8zcs") pod "c390cc35-103e-4376-a377-789d27e92301" (UID: "c390cc35-103e-4376-a377-789d27e92301"). InnerVolumeSpecName "kube-api-access-x8zcs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.082518 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd" (OuterVolumeSpecName: "kube-api-access-zwfcd") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "kube-api-access-zwfcd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.083173 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.082697 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.083004 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.083434 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.083567 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.083719 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.084542 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "1a566282-9a27-4172-b5ba-574e0179cfc4" (UID: "1a566282-9a27-4172-b5ba-574e0179cfc4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179693 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179742 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179757 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwfcd\" (UniqueName: \"kubernetes.io/projected/1a566282-9a27-4172-b5ba-574e0179cfc4-kube-api-access-zwfcd\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179769 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179782 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179796 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179831 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179847 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179861 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179873 5136 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/1a566282-9a27-4172-b5ba-574e0179cfc4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.179885 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8zcs\" (UniqueName: \"kubernetes.io/projected/c390cc35-103e-4376-a377-789d27e92301-kube-api-access-x8zcs\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.217256 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ccgmd"
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.217175 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ccgmd" event={"ID":"c390cc35-103e-4376-a377-789d27e92301","Type":"ContainerDied","Data":"6945d5a08a47f91726889cbb1087073d5ea5c7ff5da1680cfa09cb30c8ba3897"}
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.217424 5136 scope.go:117] "RemoveContainer" containerID="2d4dba2ff1549dc6f8aaa33906c467cbc9d1690e7ac80aacdc997a2f55cc4025"
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.222350 5136 generic.go:334] "Generic (PLEG): container finished" podID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerID="123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857" exitCode=0
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.222384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" event={"ID":"1a566282-9a27-4172-b5ba-574e0179cfc4","Type":"ContainerDied","Data":"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857"}
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.222415 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p"
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.222420 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-6s42p" event={"ID":"1a566282-9a27-4172-b5ba-574e0179cfc4","Type":"ContainerDied","Data":"df39f87d48bdc4108cfbbd23c050e3dcecc77d5d9cf9eff9e81e1a0106f177c3"}
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.224790 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c390cc35-103e-4376-a377-789d27e92301" (UID: "c390cc35-103e-4376-a377-789d27e92301"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.245884 5136 scope.go:117] "RemoveContainer" containerID="f46b7933668706885051cf2336daf80ae5d49b441259e450e3a49c583c6aa84a"
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.268758 5136 scope.go:117] "RemoveContainer" containerID="881e29f20587338b4b26358412017a32346fc442b6e12fb3a66b43ae0eca1b2a"
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.271902 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"]
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.277582 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-6s42p"]
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.280799 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c390cc35-103e-4376-a377-789d27e92301-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.291954 5136 scope.go:117] "RemoveContainer"
containerID="123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.313685 5136 scope.go:117] "RemoveContainer" containerID="123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857" Mar 20 06:54:26 crc kubenswrapper[5136]: E0320 06:54:26.314391 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857\": container with ID starting with 123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857 not found: ID does not exist" containerID="123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.314426 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857"} err="failed to get container status \"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857\": rpc error: code = NotFound desc = could not find container \"123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857\": container with ID starting with 123a528b42dbbfbb9f33379a6ed4b98d0ff49520ca7c349008a22be8c0d90857 not found: ID does not exist" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.404295 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" path="/var/lib/kubelet/pods/1a566282-9a27-4172-b5ba-574e0179cfc4/volumes" Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.543794 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"] Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.548211 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ccgmd"] Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.696489 5136 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"] Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.696922 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerName="controller-manager" containerID="cri-o://489bdb392a45bad16e501fa0a0fb2098e4fc47999ee7306501643766f25cd617" gracePeriod=30 Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.717793 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"] Mar 20 06:54:26 crc kubenswrapper[5136]: I0320 06:54:26.718097 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" podUID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" containerName="route-controller-manager" containerID="cri-o://73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f" gracePeriod=30 Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.224042 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.228055 5136 generic.go:334] "Generic (PLEG): container finished" podID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerID="489bdb392a45bad16e501fa0a0fb2098e4fc47999ee7306501643766f25cd617" exitCode=0 Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.228092 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" event={"ID":"cada42b5-7a5d-47d5-84e7-6c5612db1132","Type":"ContainerDied","Data":"489bdb392a45bad16e501fa0a0fb2098e4fc47999ee7306501643766f25cd617"} Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.231418 5136 generic.go:334] "Generic (PLEG): container finished" podID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" containerID="73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f" exitCode=0 Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.231455 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.231461 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" event={"ID":"6a3b5dcf-bcd1-4502-88f8-50c39af7e940","Type":"ContainerDied","Data":"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f"} Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.231512 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl" event={"ID":"6a3b5dcf-bcd1-4502-88f8-50c39af7e940","Type":"ContainerDied","Data":"6a315e78b7f2d9cc8f4d33cb5e467ab336b28cd1fd4260458044c4145ecd3b51"} Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.231535 5136 scope.go:117] "RemoveContainer" containerID="73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.246122 5136 scope.go:117] "RemoveContainer" containerID="73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f" Mar 20 06:54:27 crc kubenswrapper[5136]: E0320 06:54:27.246558 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f\": container with ID starting with 73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f not found: ID does not exist" containerID="73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.246638 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f"} err="failed to get container status \"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f\": rpc error: code = NotFound desc 
= could not find container \"73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f\": container with ID starting with 73e4750c723efe3dac2d23514f593d6e462dc0701ba1d4ac5a82de8cfb10a93f not found: ID does not exist" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.298227 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert\") pod \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.298314 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config\") pod \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.298390 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-759tv\" (UniqueName: \"kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv\") pod \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.298416 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca\") pod \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\" (UID: \"6a3b5dcf-bcd1-4502-88f8-50c39af7e940\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.301639 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca" (OuterVolumeSpecName: "client-ca") pod "6a3b5dcf-bcd1-4502-88f8-50c39af7e940" (UID: "6a3b5dcf-bcd1-4502-88f8-50c39af7e940"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.309568 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config" (OuterVolumeSpecName: "config") pod "6a3b5dcf-bcd1-4502-88f8-50c39af7e940" (UID: "6a3b5dcf-bcd1-4502-88f8-50c39af7e940"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.310012 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.310032 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.311931 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv" (OuterVolumeSpecName: "kube-api-access-759tv") pod "6a3b5dcf-bcd1-4502-88f8-50c39af7e940" (UID: "6a3b5dcf-bcd1-4502-88f8-50c39af7e940"). InnerVolumeSpecName "kube-api-access-759tv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.315744 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.316586 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6a3b5dcf-bcd1-4502-88f8-50c39af7e940" (UID: "6a3b5dcf-bcd1-4502-88f8-50c39af7e940"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411120 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles\") pod \"cada42b5-7a5d-47d5-84e7-6c5612db1132\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411434 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config\") pod \"cada42b5-7a5d-47d5-84e7-6c5612db1132\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411474 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqzq5\" (UniqueName: \"kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5\") pod \"cada42b5-7a5d-47d5-84e7-6c5612db1132\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411496 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca\") pod \"cada42b5-7a5d-47d5-84e7-6c5612db1132\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411551 
5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert\") pod \"cada42b5-7a5d-47d5-84e7-6c5612db1132\" (UID: \"cada42b5-7a5d-47d5-84e7-6c5612db1132\") " Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411730 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-759tv\" (UniqueName: \"kubernetes.io/projected/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-kube-api-access-759tv\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.411742 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a3b5dcf-bcd1-4502-88f8-50c39af7e940-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.412597 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "cada42b5-7a5d-47d5-84e7-6c5612db1132" (UID: "cada42b5-7a5d-47d5-84e7-6c5612db1132"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.412631 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config" (OuterVolumeSpecName: "config") pod "cada42b5-7a5d-47d5-84e7-6c5612db1132" (UID: "cada42b5-7a5d-47d5-84e7-6c5612db1132"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.412764 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca" (OuterVolumeSpecName: "client-ca") pod "cada42b5-7a5d-47d5-84e7-6c5612db1132" (UID: "cada42b5-7a5d-47d5-84e7-6c5612db1132"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.414582 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5" (OuterVolumeSpecName: "kube-api-access-kqzq5") pod "cada42b5-7a5d-47d5-84e7-6c5612db1132" (UID: "cada42b5-7a5d-47d5-84e7-6c5612db1132"). InnerVolumeSpecName "kube-api-access-kqzq5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.415338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cada42b5-7a5d-47d5-84e7-6c5612db1132" (UID: "cada42b5-7a5d-47d5-84e7-6c5612db1132"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.512877 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cada42b5-7a5d-47d5-84e7-6c5612db1132-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.512918 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.512934 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.512947 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqzq5\" (UniqueName: \"kubernetes.io/projected/cada42b5-7a5d-47d5-84e7-6c5612db1132-kube-api-access-kqzq5\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.512959 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/cada42b5-7a5d-47d5-84e7-6c5612db1132-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.560354 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"] Mar 20 06:54:27 crc kubenswrapper[5136]: I0320 06:54:27.562937 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d44b75469-h75kl"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.236773 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" 
event={"ID":"cada42b5-7a5d-47d5-84e7-6c5612db1132","Type":"ContainerDied","Data":"3d3b4204912003a659f2d723f3606e1a424a5ae09bdc6ee3049fa1cfb5307c75"} Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.236827 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.236859 5136 scope.go:117] "RemoveContainer" containerID="489bdb392a45bad16e501fa0a0fb2098e4fc47999ee7306501643766f25cd617" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.261504 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.263724 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5bcf5ffddd-j6v5f"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.406037 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" path="/var/lib/kubelet/pods/6a3b5dcf-bcd1-4502-88f8-50c39af7e940/volumes" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.406510 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c390cc35-103e-4376-a377-789d27e92301" path="/var/lib/kubelet/pods/c390cc35-103e-4376-a377-789d27e92301/volumes" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.407267 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" path="/var/lib/kubelet/pods/cada42b5-7a5d-47d5-84e7-6c5612db1132/volumes" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.542812 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543034 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543045 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543097 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543104 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543112 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" containerName="route-controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543120 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" containerName="route-controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543128 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543134 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543145 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543151 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543159 5136 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543165 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543173 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543178 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543186 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerName="controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543191 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerName="controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543199 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543204 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543218 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerName="oauth-openshift" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543225 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerName="oauth-openshift" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543235 5136 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543241 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543247 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543252 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543261 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543268 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="extract-utilities" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543275 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543280 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: E0320 06:54:28.543290 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543295 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="extract-content" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543378 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b5f9659e-73fb-4389-8d6e-b739dfa94d4b" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543387 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="cada42b5-7a5d-47d5-84e7-6c5612db1132" containerName="controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543395 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ecf0c0d-35e3-402c-ac3a-60bb2686de5e" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543405 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a566282-9a27-4172-b5ba-574e0179cfc4" containerName="oauth-openshift" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543412 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c390cc35-103e-4376-a377-789d27e92301" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543419 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a3b5dcf-bcd1-4502-88f8-50c39af7e940" containerName="route-controller-manager" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543427 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cf24582-9ee1-4a25-9293-b116d55e6465" containerName="registry-server" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.543779 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.545653 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.547634 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.548307 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.552964 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.553087 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.553394 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.554158 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.554270 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.554659 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.554692 5136 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.554913 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.555026 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.555108 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.555307 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.573587 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.580000 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.590425 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.626949 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.626998 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627022 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627045 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdbfl\" (UniqueName: \"kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627101 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk6g4\" (UniqueName: \"kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627130 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert\") pod 
\"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627154 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627178 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.627219 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.727990 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728067 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728090 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728108 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728124 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdbfl\" (UniqueName: \"kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk6g4\" (UniqueName: \"kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4\") pod \"controller-manager-588cbf568d-gdtth\" (UID: 
\"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728206 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728274 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.728389 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.729464 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.729524 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.729690 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.730424 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.731516 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.734455 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.736349 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.743275 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdbfl\" (UniqueName: \"kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl\") pod \"route-controller-manager-55d77fd856-4bsxd\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.743478 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk6g4\" (UniqueName: \"kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4\") pod \"controller-manager-588cbf568d-gdtth\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.871962 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:28 crc kubenswrapper[5136]: I0320 06:54:28.895174 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.124961 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:29 crc kubenswrapper[5136]: W0320 06:54:29.131944 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod30eefe16_e27d_48ba_8ddc_6323d5ef7dff.slice/crio-3af454d0d61bd02a5123db5d460f020a2c15ffaa85dc6d97ea18da27415da35d WatchSource:0}: Error finding container 3af454d0d61bd02a5123db5d460f020a2c15ffaa85dc6d97ea18da27415da35d: Status 404 returned error can't find the container with id 3af454d0d61bd02a5123db5d460f020a2c15ffaa85dc6d97ea18da27415da35d Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.243790 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" event={"ID":"30eefe16-e27d-48ba-8ddc-6323d5ef7dff","Type":"ContainerStarted","Data":"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7"} Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.243842 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" event={"ID":"30eefe16-e27d-48ba-8ddc-6323d5ef7dff","Type":"ContainerStarted","Data":"3af454d0d61bd02a5123db5d460f020a2c15ffaa85dc6d97ea18da27415da35d"} Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.244598 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.246020 5136 patch_prober.go:28] interesting pod/route-controller-manager-55d77fd856-4bsxd container/route-controller-manager namespace/openshift-route-controller-manager: 
Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.246054 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.264098 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" podStartSLOduration=3.264078601 podStartE2EDuration="3.264078601s" podCreationTimestamp="2026-03-20 06:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:29.259063627 +0000 UTC m=+301.518374778" watchObservedRunningTime="2026-03-20 06:54:29.264078601 +0000 UTC m=+301.523389762" Mar 20 06:54:29 crc kubenswrapper[5136]: I0320 06:54:29.289747 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:29 crc kubenswrapper[5136]: W0320 06:54:29.293615 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4e3e9f4_ed5f_49be_b08b_7d1c98d815e6.slice/crio-3f1c88baaf30595bdea8d4726c5ba10c496fd749769105c29ab6885acc9bedde WatchSource:0}: Error finding container 3f1c88baaf30595bdea8d4726c5ba10c496fd749769105c29ab6885acc9bedde: Status 404 returned error can't find the container with id 3f1c88baaf30595bdea8d4726c5ba10c496fd749769105c29ab6885acc9bedde Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.254099 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" event={"ID":"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6","Type":"ContainerStarted","Data":"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f"} Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.254562 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" event={"ID":"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6","Type":"ContainerStarted","Data":"3f1c88baaf30595bdea8d4726c5ba10c496fd749769105c29ab6885acc9bedde"} Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.259261 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.280176 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" podStartSLOduration=4.280146897 podStartE2EDuration="4.280146897s" podCreationTimestamp="2026-03-20 06:54:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:30.274921786 +0000 UTC m=+302.534232997" watchObservedRunningTime="2026-03-20 06:54:30.280146897 +0000 UTC m=+302.539458078" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.547104 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-8449b79ffb-pfnv9"] Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.548931 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.560746 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.563198 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.563564 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.563222 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.564114 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.565081 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.565207 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.565424 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.566312 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.568044 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 20 06:54:30 
crc kubenswrapper[5136]: I0320 06:54:30.568559 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.568598 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.576584 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.590201 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.598273 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8449b79ffb-pfnv9"] Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.600532 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651348 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651408 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdv9n\" (UniqueName: \"kubernetes.io/projected/57f31029-60e4-4bcb-a75a-c88030d19563-kube-api-access-gdv9n\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " 
pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651440 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-error\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651485 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651506 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651591 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-login\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651714 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651773 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-audit-policies\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.651966 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-session\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.652010 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-router-certs\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.652060 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/57f31029-60e4-4bcb-a75a-c88030d19563-audit-dir\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.652085 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.652186 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-service-ca\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.652225 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.754369 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdv9n\" (UniqueName: \"kubernetes.io/projected/57f31029-60e4-4bcb-a75a-c88030d19563-kube-api-access-gdv9n\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " 
pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.754534 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-error\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.754621 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.754710 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.754802 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-login\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755002 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755093 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-audit-policies\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755250 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-session\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-router-certs\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755443 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57f31029-60e4-4bcb-a75a-c88030d19563-audit-dir\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " 
pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755530 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.755653 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57f31029-60e4-4bcb-a75a-c88030d19563-audit-dir\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.756039 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-audit-policies\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.756867 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-cliconfig\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.757045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-service-ca\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.757139 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.757229 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.757297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.759187 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-service-ca\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " 
pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.761567 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-router-certs\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.761915 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762431 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-error\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762456 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762494 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-serving-cert\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762508 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-system-session\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762533 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-template-login\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.762729 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/57f31029-60e4-4bcb-a75a-c88030d19563-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.773696 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdv9n\" (UniqueName: \"kubernetes.io/projected/57f31029-60e4-4bcb-a75a-c88030d19563-kube-api-access-gdv9n\") pod \"oauth-openshift-8449b79ffb-pfnv9\" (UID: \"57f31029-60e4-4bcb-a75a-c88030d19563\") " 
pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:30 crc kubenswrapper[5136]: I0320 06:54:30.891846 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:31 crc kubenswrapper[5136]: I0320 06:54:31.264394 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:31 crc kubenswrapper[5136]: I0320 06:54:31.275205 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:31 crc kubenswrapper[5136]: I0320 06:54:31.325882 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-8449b79ffb-pfnv9"] Mar 20 06:54:32 crc kubenswrapper[5136]: I0320 06:54:32.291999 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" event={"ID":"57f31029-60e4-4bcb-a75a-c88030d19563","Type":"ContainerStarted","Data":"0f40371f46b1b2ef90f7cc703e304d506b3ee56f33f2580a78a85415e4e90a6d"} Mar 20 06:54:32 crc kubenswrapper[5136]: I0320 06:54:32.292599 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" event={"ID":"57f31029-60e4-4bcb-a75a-c88030d19563","Type":"ContainerStarted","Data":"de1932609b7e537e1144511b2a0d2be95af97dfc2105c7beaa4aef1194ea5606"} Mar 20 06:54:32 crc kubenswrapper[5136]: I0320 06:54:32.318257 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" podStartSLOduration=32.318235305 podStartE2EDuration="32.318235305s" podCreationTimestamp="2026-03-20 06:54:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 
06:54:32.317305914 +0000 UTC m=+304.576617075" watchObservedRunningTime="2026-03-20 06:54:32.318235305 +0000 UTC m=+304.577546466" Mar 20 06:54:33 crc kubenswrapper[5136]: I0320 06:54:33.295956 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:33 crc kubenswrapper[5136]: I0320 06:54:33.301593 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-8449b79ffb-pfnv9" Mar 20 06:54:45 crc kubenswrapper[5136]: I0320 06:54:45.822148 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:54:45 crc kubenswrapper[5136]: I0320 06:54:45.822716 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:54:45 crc kubenswrapper[5136]: I0320 06:54:45.822767 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:54:45 crc kubenswrapper[5136]: I0320 06:54:45.823461 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 06:54:45 crc kubenswrapper[5136]: I0320 06:54:45.823525 5136 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9" gracePeriod=600 Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.364986 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9" exitCode=0 Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.365094 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9"} Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.365374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13"} Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.720399 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.720629 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" podUID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" containerName="controller-manager" containerID="cri-o://9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f" gracePeriod=30 Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.809343 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:46 crc kubenswrapper[5136]: I0320 06:54:46.809540 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerName="route-controller-manager" containerID="cri-o://247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7" gracePeriod=30 Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.271519 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.275590 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.374442 5136 generic.go:334] "Generic (PLEG): container finished" podID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerID="247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7" exitCode=0 Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.374559 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.374892 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" event={"ID":"30eefe16-e27d-48ba-8ddc-6323d5ef7dff","Type":"ContainerDied","Data":"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7"} Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.374970 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd" event={"ID":"30eefe16-e27d-48ba-8ddc-6323d5ef7dff","Type":"ContainerDied","Data":"3af454d0d61bd02a5123db5d460f020a2c15ffaa85dc6d97ea18da27415da35d"} Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.375003 5136 scope.go:117] "RemoveContainer" containerID="247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.377220 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" containerID="9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f" exitCode=0 Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.377258 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.377261 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" event={"ID":"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6","Type":"ContainerDied","Data":"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f"} Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.377296 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-588cbf568d-gdtth" event={"ID":"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6","Type":"ContainerDied","Data":"3f1c88baaf30595bdea8d4726c5ba10c496fd749769105c29ab6885acc9bedde"} Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.392668 5136 scope.go:117] "RemoveContainer" containerID="247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7" Mar 20 06:54:47 crc kubenswrapper[5136]: E0320 06:54:47.393092 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7\": container with ID starting with 247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7 not found: ID does not exist" containerID="247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.393125 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7"} err="failed to get container status \"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7\": rpc error: code = NotFound desc = could not find container \"247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7\": container with ID starting with 247895d69dd7a770faa41409cbbb9fc23dacb27b7df4bfb3165bbe0b5764b9b7 not found: ID does 
not exist" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.393149 5136 scope.go:117] "RemoveContainer" containerID="9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.406681 5136 scope.go:117] "RemoveContainer" containerID="9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f" Mar 20 06:54:47 crc kubenswrapper[5136]: E0320 06:54:47.407234 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f\": container with ID starting with 9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f not found: ID does not exist" containerID="9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.407279 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f"} err="failed to get container status \"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f\": rpc error: code = NotFound desc = could not find container \"9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f\": container with ID starting with 9d92d33e89d7823c6e22b13fce888286155c297935c26ab8ad0784bbe6a3947f not found: ID does not exist" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422615 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert\") pod \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422662 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca\") pod \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422685 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config\") pod \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422705 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles\") pod \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422732 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca\") pod \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422756 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config\") pod \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422773 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdbfl\" (UniqueName: \"kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl\") pod \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422803 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert\") pod \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\" (UID: \"30eefe16-e27d-48ba-8ddc-6323d5ef7dff\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.422877 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kk6g4\" (UniqueName: \"kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4\") pod \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\" (UID: \"d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6\") " Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.423657 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" (UID: "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.423670 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca" (OuterVolumeSpecName: "client-ca") pod "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" (UID: "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.423719 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config" (OuterVolumeSpecName: "config") pod "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" (UID: "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.423851 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config" (OuterVolumeSpecName: "config") pod "30eefe16-e27d-48ba-8ddc-6323d5ef7dff" (UID: "30eefe16-e27d-48ba-8ddc-6323d5ef7dff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.424122 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca" (OuterVolumeSpecName: "client-ca") pod "30eefe16-e27d-48ba-8ddc-6323d5ef7dff" (UID: "30eefe16-e27d-48ba-8ddc-6323d5ef7dff"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.428554 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" (UID: "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.429209 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4" (OuterVolumeSpecName: "kube-api-access-kk6g4") pod "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" (UID: "d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6"). InnerVolumeSpecName "kube-api-access-kk6g4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.434573 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl" (OuterVolumeSpecName: "kube-api-access-jdbfl") pod "30eefe16-e27d-48ba-8ddc-6323d5ef7dff" (UID: "30eefe16-e27d-48ba-8ddc-6323d5ef7dff"). InnerVolumeSpecName "kube-api-access-jdbfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.435134 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "30eefe16-e27d-48ba-8ddc-6323d5ef7dff" (UID: "30eefe16-e27d-48ba-8ddc-6323d5ef7dff"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524268 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524298 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524307 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524316 5136 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Mar 20 
06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524325 5136 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-client-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524333 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-config\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524341 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdbfl\" (UniqueName: \"kubernetes.io/projected/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-kube-api-access-jdbfl\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524349 5136 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30eefe16-e27d-48ba-8ddc-6323d5ef7dff-serving-cert\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.524392 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kk6g4\" (UniqueName: \"kubernetes.io/projected/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6-kube-api-access-kk6g4\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.706565 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.711174 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-588cbf568d-gdtth"] Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.717964 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:47 crc kubenswrapper[5136]: I0320 06:54:47.720886 5136 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-55d77fd856-4bsxd"] Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.403974 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" path="/var/lib/kubelet/pods/30eefe16-e27d-48ba-8ddc-6323d5ef7dff/volumes" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.405049 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" path="/var/lib/kubelet/pods/d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6/volumes" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.557533 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"] Mar 20 06:54:48 crc kubenswrapper[5136]: E0320 06:54:48.557795 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" containerName="controller-manager" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.557807 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" containerName="controller-manager" Mar 20 06:54:48 crc kubenswrapper[5136]: E0320 06:54:48.557831 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerName="route-controller-manager" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.557836 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerName="route-controller-manager" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.557933 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e3e9f4-ed5f-49be-b08b-7d1c98d815e6" containerName="controller-manager" Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.557946 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="30eefe16-e27d-48ba-8ddc-6323d5ef7dff" containerName="route-controller-manager"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.558330 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.560229 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5488b6b747-g82fl"]
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.560886 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.561178 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.561326 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.561571 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.561625 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.561720 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.562107 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.567758 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.568233 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.568301 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.568240 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.569405 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.569594 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.571649 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"]
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.574011 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5488b6b747-g82fl"]
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.574767 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735748 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-client-ca\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735779 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5d77e9-afc8-4189-8d8a-94b71989f364-serving-cert\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735798 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-client-ca\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735836 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-proxy-ca-bundles\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735860 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984bz\" (UniqueName: \"kubernetes.io/projected/3149bced-bf2c-43ac-aec3-407029760012-kube-api-access-984bz\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.735946 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3149bced-bf2c-43ac-aec3-407029760012-serving-cert\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.736003 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-config\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.736040 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-config\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.736063 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrbtf\" (UniqueName: \"kubernetes.io/projected/8f5d77e9-afc8-4189-8d8a-94b71989f364-kube-api-access-vrbtf\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.837723 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3149bced-bf2c-43ac-aec3-407029760012-serving-cert\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.837785 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-config\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.837846 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-config\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.837876 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrbtf\" (UniqueName: \"kubernetes.io/projected/8f5d77e9-afc8-4189-8d8a-94b71989f364-kube-api-access-vrbtf\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.837994 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-client-ca\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.838019 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5d77e9-afc8-4189-8d8a-94b71989f364-serving-cert\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.838041 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-client-ca\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.838095 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-proxy-ca-bundles\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.838127 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-984bz\" (UniqueName: \"kubernetes.io/projected/3149bced-bf2c-43ac-aec3-407029760012-kube-api-access-984bz\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.839237 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-client-ca\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.839283 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-config\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.839587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-config\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.839706 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3149bced-bf2c-43ac-aec3-407029760012-proxy-ca-bundles\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.839976 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8f5d77e9-afc8-4189-8d8a-94b71989f364-client-ca\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.843986 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8f5d77e9-afc8-4189-8d8a-94b71989f364-serving-cert\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.844872 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3149bced-bf2c-43ac-aec3-407029760012-serving-cert\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.861110 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrbtf\" (UniqueName: \"kubernetes.io/projected/8f5d77e9-afc8-4189-8d8a-94b71989f364-kube-api-access-vrbtf\") pod \"route-controller-manager-7cf8dccb89-xtgvl\" (UID: \"8f5d77e9-afc8-4189-8d8a-94b71989f364\") " pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.867337 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-984bz\" (UniqueName: \"kubernetes.io/projected/3149bced-bf2c-43ac-aec3-407029760012-kube-api-access-984bz\") pod \"controller-manager-5488b6b747-g82fl\" (UID: \"3149bced-bf2c-43ac-aec3-407029760012\") " pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.877260 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:48 crc kubenswrapper[5136]: I0320 06:54:48.884578 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.094748 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"]
Mar 20 06:54:49 crc kubenswrapper[5136]: W0320 06:54:49.120928 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f5d77e9_afc8_4189_8d8a_94b71989f364.slice/crio-1672daf04981930abd8b3b7f7208a20e079d6d2d506738ed65439d9d36662e5c WatchSource:0}: Error finding container 1672daf04981930abd8b3b7f7208a20e079d6d2d506738ed65439d9d36662e5c: Status 404 returned error can't find the container with id 1672daf04981930abd8b3b7f7208a20e079d6d2d506738ed65439d9d36662e5c
Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.391607 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" event={"ID":"8f5d77e9-afc8-4189-8d8a-94b71989f364","Type":"ContainerStarted","Data":"b0180e2b5726b9f21fbe130daf7a93bc5222589cdf942788449c3a0bf5214f06"}
Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.391668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" event={"ID":"8f5d77e9-afc8-4189-8d8a-94b71989f364","Type":"ContainerStarted","Data":"1672daf04981930abd8b3b7f7208a20e079d6d2d506738ed65439d9d36662e5c"}
Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.391991 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.420723 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl" podStartSLOduration=3.4207032330000002 podStartE2EDuration="3.420703233s" podCreationTimestamp="2026-03-20 06:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:49.408927527 +0000 UTC m=+321.668238678" watchObservedRunningTime="2026-03-20 06:54:49.420703233 +0000 UTC m=+321.680014384"
Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.421125 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5488b6b747-g82fl"]
Mar 20 06:54:49 crc kubenswrapper[5136]: W0320 06:54:49.424518 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3149bced_bf2c_43ac_aec3_407029760012.slice/crio-674abe5085bba65b1ed3e19075049b42c0e5802333499f037765de605d3fb23d WatchSource:0}: Error finding container 674abe5085bba65b1ed3e19075049b42c0e5802333499f037765de605d3fb23d: Status 404 returned error can't find the container with id 674abe5085bba65b1ed3e19075049b42c0e5802333499f037765de605d3fb23d
Mar 20 06:54:49 crc kubenswrapper[5136]: I0320 06:54:49.651267 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cf8dccb89-xtgvl"
Mar 20 06:54:50 crc kubenswrapper[5136]: I0320 06:54:50.401542 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:50 crc kubenswrapper[5136]: I0320 06:54:50.401576 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" event={"ID":"3149bced-bf2c-43ac-aec3-407029760012","Type":"ContainerStarted","Data":"34b417e13f114a9ab9bc9cbea0eff9bc390c97b2d971b73b275a465b73e249a7"}
Mar 20 06:54:50 crc kubenswrapper[5136]: I0320 06:54:50.401592 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" event={"ID":"3149bced-bf2c-43ac-aec3-407029760012","Type":"ContainerStarted","Data":"674abe5085bba65b1ed3e19075049b42c0e5802333499f037765de605d3fb23d"}
Mar 20 06:54:50 crc kubenswrapper[5136]: I0320 06:54:50.403326 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl"
Mar 20 06:54:50 crc kubenswrapper[5136]: I0320 06:54:50.453127 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5488b6b747-g82fl" podStartSLOduration=4.453108886 podStartE2EDuration="4.453108886s" podCreationTimestamp="2026-03-20 06:54:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:54:50.431690193 +0000 UTC m=+322.691001344" watchObservedRunningTime="2026-03-20 06:54:50.453108886 +0000 UTC m=+322.712420037"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.893641 5136 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.894882 5136 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.894996 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895175 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080" gracePeriod=15
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895241 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477" gracePeriod=15
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895311 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc" gracePeriod=15
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895343 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f" gracePeriod=15
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895428 5136 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.895880 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895914 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.895337 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa" gracePeriod=15
Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.895931 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896027 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896049 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896062 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896084 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896095 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896108 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896118 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896133 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896142 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896156 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896164 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896172 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896181 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896304 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896315 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896327 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896338 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896354 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896366 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896380 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896540 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896555 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: E0320 06:54:55.896577 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896588 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.896734 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:55 crc kubenswrapper[5136]: I0320 06:54:55.897038 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058316 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058620 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058680 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058703 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058773 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058831 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058865 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.058966 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159702 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159742 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159777 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159845 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159870 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159936 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159934 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159959 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160022 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160043 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159928 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.159969 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160065 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160152 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160163 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.160294 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.428622 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.430156 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.430911 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477" exitCode=0
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.430967 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa" exitCode=0
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.430979 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f" exitCode=0
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.430992 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc" exitCode=2
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.431026 5136 scope.go:117] "RemoveContainer" containerID="74f9f160dd259d212fce9938218319f63d5fa750ba11b8cf72adff768a3efdf2"
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.438887 5136 generic.go:334] "Generic (PLEG): container finished" podID="84671130-5991-4032-964a-01c61fefc56a" containerID="41fd895caa40c1a2b0a6165ebaa7bf7c883b138febb47e401574b2b95cc9077c" exitCode=0
Mar 20 06:54:56 crc kubenswrapper[5136]: I0320 06:54:56.438922 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84671130-5991-4032-964a-01c61fefc56a","Type":"ContainerDied","Data":"41fd895caa40c1a2b0a6165ebaa7bf7c883b138febb47e401574b2b95cc9077c"}
Mar 20 06:54:56 crc 
kubenswrapper[5136]: I0320 06:54:56.439614 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.448829 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.834717 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.835235 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.994768 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock\") pod \"84671130-5991-4032-964a-01c61fefc56a\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.995186 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir\") pod \"84671130-5991-4032-964a-01c61fefc56a\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " Mar 20 06:54:57 crc kubenswrapper[5136]: I0320 06:54:57.995257 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access\") pod \"84671130-5991-4032-964a-01c61fefc56a\" (UID: \"84671130-5991-4032-964a-01c61fefc56a\") " Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:57.999679 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock" (OuterVolumeSpecName: "var-lock") pod "84671130-5991-4032-964a-01c61fefc56a" (UID: "84671130-5991-4032-964a-01c61fefc56a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:57.999742 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "84671130-5991-4032-964a-01c61fefc56a" (UID: "84671130-5991-4032-964a-01c61fefc56a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.007707 5136 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.014149 5136 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/84671130-5991-4032-964a-01c61fefc56a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.039039 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "84671130-5991-4032-964a-01c61fefc56a" (UID: "84671130-5991-4032-964a-01c61fefc56a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.115257 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/84671130-5991-4032-964a-01c61fefc56a-kube-api-access\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.254762 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.256104 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.256799 5136 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.257131 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317355 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317413 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317498 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317539 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317609 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317675 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317898 5136 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317910 5136 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.317918 5136 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.398862 5136 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.399090 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.403978 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.457954 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.458579 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080" exitCode=0 Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.458674 5136 scope.go:117] "RemoveContainer" containerID="d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.458693 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.459491 5136 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.459929 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.460265 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"84671130-5991-4032-964a-01c61fefc56a","Type":"ContainerDied","Data":"a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0"} Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.460294 5136 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="a6ea3be03aea5ddfc716d4a3eee4ac608fed37a3195883a5113f85c728e131f0" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.460380 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.462207 5136 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.462524 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.463455 5136 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.463838 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.475141 5136 scope.go:117] "RemoveContainer" containerID="5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa" Mar 20 
06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.488594 5136 scope.go:117] "RemoveContainer" containerID="98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.504117 5136 scope.go:117] "RemoveContainer" containerID="086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.518602 5136 scope.go:117] "RemoveContainer" containerID="430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.533261 5136 scope.go:117] "RemoveContainer" containerID="92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.550527 5136 scope.go:117] "RemoveContainer" containerID="d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.550942 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\": container with ID starting with d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477 not found: ID does not exist" containerID="d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.550978 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477"} err="failed to get container status \"d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\": rpc error: code = NotFound desc = could not find container \"d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477\": container with ID starting with d9b61e008901ecac38361e2e73cf1ed15fb86043b1677e47f4cf1dbb3ac59477 not found: ID does not exist" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 
06:54:58.551004 5136 scope.go:117] "RemoveContainer" containerID="5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.551263 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\": container with ID starting with 5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa not found: ID does not exist" containerID="5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.551315 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa"} err="failed to get container status \"5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\": rpc error: code = NotFound desc = could not find container \"5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa\": container with ID starting with 5e093a09d1cc89422ca4816f8679a0d0db8a060dace9d1666d68a8691f2b03fa not found: ID does not exist" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.551349 5136 scope.go:117] "RemoveContainer" containerID="98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.551616 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\": container with ID starting with 98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f not found: ID does not exist" containerID="98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.551653 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f"} err="failed to get container status \"98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\": rpc error: code = NotFound desc = could not find container \"98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f\": container with ID starting with 98080344c6b24f5633a1e91c54d040474b8b9f35717a8a700906f29760691a2f not found: ID does not exist" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.551677 5136 scope.go:117] "RemoveContainer" containerID="086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.551959 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\": container with ID starting with 086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc not found: ID does not exist" containerID="086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.551993 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc"} err="failed to get container status \"086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\": rpc error: code = NotFound desc = could not find container \"086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc\": container with ID starting with 086f5aa828d353aa25dbbec819e6dbb6914ec948ffb7c07c693b7bb8fe2659bc not found: ID does not exist" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.552015 5136 scope.go:117] "RemoveContainer" containerID="430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.552242 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\": container with ID starting with 430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080 not found: ID does not exist" containerID="430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.552278 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080"} err="failed to get container status \"430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\": rpc error: code = NotFound desc = could not find container \"430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080\": container with ID starting with 430d702e428138933a1b26647174b9db5ffe4abc3dac393a24223ebf9d1af080 not found: ID does not exist" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.552305 5136 scope.go:117] "RemoveContainer" containerID="92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0" Mar 20 06:54:58 crc kubenswrapper[5136]: E0320 06:54:58.552554 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\": container with ID starting with 92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0 not found: ID does not exist" containerID="92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0" Mar 20 06:54:58 crc kubenswrapper[5136]: I0320 06:54:58.552587 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0"} err="failed to get container status \"92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\": rpc error: code = NotFound desc = could not find container 
\"92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0\": container with ID starting with 92269314c8a9cad77c6c5e36a48bb74f8b839462a4eb2916705bd8ae0432bab0 not found: ID does not exist" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.352149 5136 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.353061 5136 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.353585 5136 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.354012 5136 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.354427 5136 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:54:59 crc kubenswrapper[5136]: I0320 06:54:59.354486 5136 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.354954 5136 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="200ms" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.556236 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="400ms" Mar 20 06:54:59 crc kubenswrapper[5136]: E0320 06:54:59.957209 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="800ms" Mar 20 06:55:00 crc kubenswrapper[5136]: E0320 06:55:00.757797 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="1.6s" Mar 20 06:55:00 crc kubenswrapper[5136]: E0320 06:55:00.931509 5136 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:55:00 crc kubenswrapper[5136]: I0320 06:55:00.932133 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:55:00 crc kubenswrapper[5136]: E0320 06:55:00.981017 5136 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189e7a3be9daf387 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:55:00.975653767 +0000 UTC m=+333.234964948,LastTimestamp:2026-03-20 06:55:00.975653767 +0000 UTC m=+333.234964948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:55:01 crc kubenswrapper[5136]: I0320 06:55:01.482653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528"} Mar 20 06:55:01 crc kubenswrapper[5136]: I0320 06:55:01.482703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"95d24620254c3f60d5c7bc369c215909ecbb4fc8aadf715024c7ab1ece7f1ef1"} Mar 20 06:55:01 crc 
kubenswrapper[5136]: E0320 06:55:01.483292 5136 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:55:01 crc kubenswrapper[5136]: I0320 06:55:01.483501 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:55:02 crc kubenswrapper[5136]: E0320 06:55:02.358798 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="3.2s" Mar 20 06:55:02 crc kubenswrapper[5136]: I0320 06:55:02.467133 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:55:02 crc kubenswrapper[5136]: I0320 06:55:02.467208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:55:02 crc kubenswrapper[5136]: I0320 06:55:02.467315 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:55:02 crc kubenswrapper[5136]: I0320 06:55:02.467359 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:55:02 crc kubenswrapper[5136]: W0320 06:55:02.468366 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:02 crc kubenswrapper[5136]: W0320 06:55:02.468387 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:02 crc kubenswrapper[5136]: E0320 06:55:02.468436 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:02 crc kubenswrapper[5136]: E0320 06:55:02.468496 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:02 crc kubenswrapper[5136]: W0320 06:55:02.468375 5136 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:02 crc kubenswrapper[5136]: E0320 06:55:02.468602 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.468584 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.468746 5136 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.468670 5136 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.468950 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:57:05.468921637 +0000 UTC m=+457.728232828 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.468715 5136 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: failed to sync secret cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.469013 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-03-20 06:57:05.469000099 +0000 UTC m=+457.728311280 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : failed to sync secret cache: timed out waiting for the condition Mar 20 06:55:03 crc kubenswrapper[5136]: W0320 06:55:03.469556 5136 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:03 crc kubenswrapper[5136]: E0320 06:55:03.469639 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:04 crc kubenswrapper[5136]: W0320 06:55:04.217229 5136 reflector.go:561] object-"openshift-network-diagnostics"/"kube-root-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.217538 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469728 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469764 5136 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469792 5136 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469839 5136 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469923 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-03-20 06:57:06.469890137 +0000 UTC m=+458.729201318 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:04 crc kubenswrapper[5136]: E0320 06:55:04.469952 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-03-20 06:57:06.469938639 +0000 UTC m=+458.729249830 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : failed to sync configmap cache: timed out waiting for the condition Mar 20 06:55:05 crc kubenswrapper[5136]: W0320 06:55:05.319214 5136 reflector.go:561] object-"openshift-network-diagnostics"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:05 crc kubenswrapper[5136]: E0320 06:55:05.319294 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:05 crc kubenswrapper[5136]: W0320 
06:55:05.325711 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:05 crc kubenswrapper[5136]: E0320 06:55:05.326013 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:05 crc kubenswrapper[5136]: W0320 06:55:05.518277 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:05 crc kubenswrapper[5136]: E0320 06:55:05.518343 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:05 crc kubenswrapper[5136]: E0320 06:55:05.560760 5136 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.163:6443: connect: connection refused" interval="6.4s" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.396461 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.401287 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.402797 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.424236 5136 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.424280 5136 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:08 crc kubenswrapper[5136]: E0320 06:55:08.424891 5136 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.425646 5136 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:08 crc kubenswrapper[5136]: W0320 06:55:08.455050 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-ddd4937024c47ced41cf5247b2d45e9606abee030a043c30ed646d61dd30fc46 WatchSource:0}: Error finding container ddd4937024c47ced41cf5247b2d45e9606abee030a043c30ed646d61dd30fc46: Status 404 returned error can't find the container with id ddd4937024c47ced41cf5247b2d45e9606abee030a043c30ed646d61dd30fc46 Mar 20 06:55:08 crc kubenswrapper[5136]: I0320 06:55:08.526939 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ddd4937024c47ced41cf5247b2d45e9606abee030a043c30ed646d61dd30fc46"} Mar 20 06:55:08 crc kubenswrapper[5136]: W0320 06:55:08.690581 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin-cert": failed to list *v1.Secret: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:08 crc kubenswrapper[5136]: E0320 06:55:08.690658 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/secrets?fieldSelector=metadata.name%3Dnetworking-console-plugin-cert&resourceVersion=27466\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:09 crc kubenswrapper[5136]: E0320 06:55:09.155449 5136 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.163:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189e7a3be9daf387 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-03-20 06:55:00.975653767 +0000 UTC m=+333.234964948,LastTimestamp:2026-03-20 06:55:00.975653767 +0000 UTC m=+333.234964948,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Mar 20 06:55:09 crc kubenswrapper[5136]: W0320 06:55:09.324222 5136 reflector.go:561] object-"openshift-network-console"/"networking-console-plugin": failed to list *v1.ConfigMap: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286": dial tcp 38.102.83.163:6443: connect: connection refused Mar 20 06:55:09 crc kubenswrapper[5136]: E0320 06:55:09.324582 5136 reflector.go:158] "Unhandled Error" err="object-\"openshift-network-console\"/\"networking-console-plugin\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/configmaps?fieldSelector=metadata.name%3Dnetworking-console-plugin&resourceVersion=27286\": dial tcp 38.102.83.163:6443: connect: connection refused" logger="UnhandledError" Mar 20 06:55:09 
crc kubenswrapper[5136]: I0320 06:55:09.536047 5136 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="88b0cbe92c9f0fe76ecab0aa146da4e3461a1d5f219123646bdf76ea34d956bf" exitCode=0 Mar 20 06:55:09 crc kubenswrapper[5136]: I0320 06:55:09.536110 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"88b0cbe92c9f0fe76ecab0aa146da4e3461a1d5f219123646bdf76ea34d956bf"} Mar 20 06:55:09 crc kubenswrapper[5136]: I0320 06:55:09.536501 5136 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:09 crc kubenswrapper[5136]: I0320 06:55:09.536537 5136 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:09 crc kubenswrapper[5136]: E0320 06:55:09.537170 5136 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:09 crc kubenswrapper[5136]: I0320 06:55:09.537370 5136 status_manager.go:851] "Failed to get status for pod" podUID="84671130-5991-4032-964a-01c61fefc56a" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.163:6443: connect: connection refused" Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.544773 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 
06:55:10.550532 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.550590 5136 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b" exitCode=1 Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.550661 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b"} Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.551282 5136 scope.go:117] "RemoveContainer" containerID="e4ffc6e9e66392f9782e0c5819ffe67f8f8cf8cc7c85bffa057ca94358c4963b" Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.556597 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"61226ca0c085a343c95b09c9b819acaaa4ba1895031b0a680e5bfd2098f9f7e2"} Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.556644 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"21a1179f8567f2b618fbadcb6ed8b409d1bc09df00d1a5a193de4fe073f9ff6c"} Mar 20 06:55:10 crc kubenswrapper[5136]: I0320 06:55:10.556659 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"cf6591953a1289a1ff78c791a6c1925653ce1499d8d4867190d4b19a8483a46c"} Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.563499 5136 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/1.log" Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.564939 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.565015 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"26f0afdf3d65ac7e25470e077a90b43302807523c1a95f7a25c6bdc1282e76fb"} Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.567397 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"f861ccd3228a0112c42d58802873e29995142f050dab275f773ed0d441231a89"} Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.567434 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"33de0156831f4e0c973f4efb890c2f77371bc68b881e4ab95945898fa7c40b1f"} Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.567541 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.567580 5136 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:11 crc kubenswrapper[5136]: I0320 06:55:11.567595 5136 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 
06:55:12 crc kubenswrapper[5136]: I0320 06:55:12.284250 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:55:12 crc kubenswrapper[5136]: I0320 06:55:12.288429 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:55:12 crc kubenswrapper[5136]: I0320 06:55:12.574190 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Mar 20 06:55:13 crc kubenswrapper[5136]: I0320 06:55:13.425799 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:13 crc kubenswrapper[5136]: I0320 06:55:13.426232 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:13 crc kubenswrapper[5136]: I0320 06:55:13.431274 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:16 crc kubenswrapper[5136]: I0320 06:55:16.267099 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 20 06:55:16 crc kubenswrapper[5136]: I0320 06:55:16.267234 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 20 06:55:16 crc kubenswrapper[5136]: I0320 06:55:16.577275 5136 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:16 crc kubenswrapper[5136]: I0320 06:55:16.588938 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 20 06:55:16 crc kubenswrapper[5136]: I0320 06:55:16.928162 5136 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 20 06:55:17 crc kubenswrapper[5136]: I0320 06:55:17.601349 5136 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:17 crc kubenswrapper[5136]: I0320 06:55:17.601904 5136 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:17 crc kubenswrapper[5136]: I0320 06:55:17.604946 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Mar 20 06:55:18 crc kubenswrapper[5136]: I0320 06:55:18.412928 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f8a2d5f4-1753-4860-bb3b-523d24d7c10a" Mar 20 06:55:18 crc kubenswrapper[5136]: I0320 06:55:18.605448 5136 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:18 crc kubenswrapper[5136]: I0320 06:55:18.605480 5136 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="08f93948-dc0a-4e68-80fa-26429c0c0654" Mar 20 06:55:18 crc kubenswrapper[5136]: I0320 06:55:18.608584 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="f8a2d5f4-1753-4860-bb3b-523d24d7c10a" Mar 20 06:55:23 crc kubenswrapper[5136]: E0320 06:55:23.418247 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-cqllr], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Mar 20 06:55:23 crc kubenswrapper[5136]: E0320 06:55:23.431193 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[networking-console-plugin-cert nginx-conf], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Mar 20 06:55:23 crc kubenswrapper[5136]: E0320 06:55:23.439288 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-s2dwl], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Mar 20 06:55:25 crc kubenswrapper[5136]: I0320 06:55:25.951045 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 20 06:55:26 crc kubenswrapper[5136]: I0320 06:55:26.339343 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 20 06:55:26 crc kubenswrapper[5136]: I0320 06:55:26.412528 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 20 06:55:26 crc kubenswrapper[5136]: I0320 06:55:26.615773 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 20 06:55:26 crc kubenswrapper[5136]: I0320 06:55:26.654678 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 20 06:55:26 crc kubenswrapper[5136]: I0320 06:55:26.757480 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 20 06:55:26 crc kubenswrapper[5136]: I0320 06:55:26.879972 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 20 06:55:27 crc kubenswrapper[5136]: I0320 06:55:27.446333 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Mar 20 06:55:27 crc kubenswrapper[5136]: I0320 06:55:27.492191 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Mar 20 06:55:27 crc kubenswrapper[5136]: I0320 06:55:27.886232 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 20 06:55:27 crc kubenswrapper[5136]: I0320 06:55:27.933489 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.059897 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.196852 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.480402 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.941498 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.966146 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 20 06:55:28 crc kubenswrapper[5136]: I0320 06:55:28.999332 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.019377 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.021038 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.031661 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.047678 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.299506 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.414561 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.472743 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.493568 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.576833 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.609654 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.653643 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.695602 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.861737 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 20 06:55:29 crc kubenswrapper[5136]: I0320 06:55:29.897109 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.111755 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.127990 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.181154 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.342595 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.582897 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.798785 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.827124 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.968358 5136 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 20 06:55:30 crc kubenswrapper[5136]: I0320 06:55:30.984473 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.032293 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.040247 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.046002 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.073541 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.083584 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.146476 5136 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.184458 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.224416 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.435468 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.529927 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.549905 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.582893 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.654285 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.664022 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.691946 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.696052 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.696061 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.697217 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.731127 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.782056 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.889633 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.929767 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.930054 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 20 06:55:31 crc kubenswrapper[5136]: I0320 06:55:31.935619 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.303649 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.349356 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.353516 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.387716 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.537467 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.537986 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.590947 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.665633 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.703702 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Mar 20 06:55:32 crc kubenswrapper[5136]: I0320 06:55:32.713724 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.003537 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.059652 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.105341 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.125175 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.127219 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.377310 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.560641 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.569627 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.584791 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.616174 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.708585 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.710049 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.810026 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.919198 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.930005 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.944843 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Mar 20 06:55:33 crc kubenswrapper[5136]: I0320 06:55:33.981279 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.004179 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.108957 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.119215 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.133098 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.145411 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.147300 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.184636 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.252434 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.316473 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.321113 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.378692 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.391540 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.395807 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.515566 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.518378 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.595487 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.601119 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.693751 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.715897 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.908336 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Mar 20 06:55:34 crc kubenswrapper[5136]: I0320 06:55:34.974362 5136 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.020511 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.031829 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.093943 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.171911 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.546894 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.547994 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.569599 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.678337 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.682413 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.770448 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 20 06:55:35 crc kubenswrapper[5136]: I0320 06:55:35.945709 5136 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.021723 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.115967 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.143476 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.161539 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.213436 5136 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.214051 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.218188 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.218235 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.222552 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.243313 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.255610 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=20.255590746 podStartE2EDuration="20.255590746s" podCreationTimestamp="2026-03-20 06:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:55:36.234299475 +0000 UTC m=+368.493610646" watchObservedRunningTime="2026-03-20 06:55:36.255590746 +0000 UTC m=+368.514901887"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.274250 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.280507 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.351081 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.364218 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.375156 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.394489 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.394552 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.395625 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.454043 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.461292 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.560098 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.642463 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.664838 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.704042 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.704363 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.756196 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.788050 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.823194 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.871726 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.899228 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 20 06:55:36 crc kubenswrapper[5136]: I0320 06:55:36.942538 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.135837 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.160627 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.307158 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.313800 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.369801 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.405315 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.426667 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.429927 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.447496 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.448998 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.482834 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.562448 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.595412 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.676653 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.750790 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.750804 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.752518 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Mar 20 06:55:37 crc kubenswrapper[5136]: I0320 06:55:37.871116 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.092646 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.108803 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.129545 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.226745 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.248780 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.256470 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.272723 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.304227 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.344917 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.358727 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.373717 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.396380 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.406296 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.464831 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.503219 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.608595 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.683421 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.724990 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.751696 5136 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.807879 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.850351 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.946399 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.978032 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.984027 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 20 06:55:38 crc kubenswrapper[5136]: I0320 06:55:38.985942 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.027097 5136 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.027313 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528" gracePeriod=5
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.045967 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.082302 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.088127 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.110201 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.253235 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.292320 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.338432 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.448357 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.470106 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.518376 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.519722 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.553348 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.560499 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.631084 5136 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.634029 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.650311 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.918194 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Mar 20 06:55:39 crc kubenswrapper[5136]: I0320 06:55:39.960506 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.042720 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.108894 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.166239 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.356021 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.433893 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.465754 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.482146 
5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.493336 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.581745 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.594297 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.618426 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.692227 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.710425 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.733701 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.747478 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.795634 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.844602 5136 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 20 06:55:40 crc kubenswrapper[5136]: I0320 06:55:40.920200 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.031499 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.123994 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.159063 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.159678 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.177315 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.346073 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.423772 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.468910 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.596935 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.730265 5136 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 20 06:55:41 crc kubenswrapper[5136]: I0320 06:55:41.931132 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 20 06:55:42 crc kubenswrapper[5136]: I0320 06:55:42.163449 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 20 06:55:42 crc kubenswrapper[5136]: I0320 06:55:42.463540 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 20 06:55:42 crc kubenswrapper[5136]: I0320 06:55:42.502717 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 20 06:55:42 crc kubenswrapper[5136]: I0320 06:55:42.516863 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 20 06:55:42 crc kubenswrapper[5136]: I0320 06:55:42.706793 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Mar 20 06:55:43 crc kubenswrapper[5136]: I0320 06:55:43.443454 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 20 06:55:43 crc kubenswrapper[5136]: I0320 06:55:43.521699 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 20 06:55:43 crc kubenswrapper[5136]: I0320 06:55:43.560600 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 20 06:55:43 crc kubenswrapper[5136]: I0320 06:55:43.693375 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.547540 5136 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.602657 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.602728 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.681535 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.746852 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.746895 5136 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528" exitCode=137 Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.746934 5136 scope.go:117] "RemoveContainer" containerID="bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.747009 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749576 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749606 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749632 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749671 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749728 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749740 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: 
"var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749798 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749802 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.749935 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.750073 5136 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.750087 5136 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.750097 5136 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.750105 5136 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.761255 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.764684 5136 scope.go:117] "RemoveContainer" containerID="bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528" Mar 20 06:55:44 crc kubenswrapper[5136]: E0320 06:55:44.765170 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528\": container with ID starting with bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528 not found: ID does not exist" containerID="bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.765233 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528"} err="failed to get container status \"bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528\": rpc error: code = NotFound desc = could not find container \"bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528\": container with ID starting with bcb68becd66fa9e6491a4cc4d11d6fcc7c5262eddecaba69dcc375b576472528 not found: ID does not exist" Mar 20 06:55:44 crc kubenswrapper[5136]: I0320 06:55:44.852037 5136 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Mar 20 06:55:46 crc kubenswrapper[5136]: I0320 06:55:46.407568 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Mar 20 06:55:58 crc kubenswrapper[5136]: I0320 06:55:58.839093 5136 generic.go:334] "Generic (PLEG): container finished" podID="289bd2af-981a-4da9-af4b-77ef6fd7e526" 
containerID="83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a" exitCode=0 Mar 20 06:55:58 crc kubenswrapper[5136]: I0320 06:55:58.839208 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerDied","Data":"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a"} Mar 20 06:55:58 crc kubenswrapper[5136]: I0320 06:55:58.839962 5136 scope.go:117] "RemoveContainer" containerID="83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a" Mar 20 06:55:59 crc kubenswrapper[5136]: I0320 06:55:59.847780 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerStarted","Data":"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7"} Mar 20 06:55:59 crc kubenswrapper[5136]: I0320 06:55:59.848559 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:55:59 crc kubenswrapper[5136]: I0320 06:55:59.850643 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154061 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566496-kkbk6"] Mar 20 06:56:00 crc kubenswrapper[5136]: E0320 06:56:00.154288 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84671130-5991-4032-964a-01c61fefc56a" containerName="installer" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154303 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="84671130-5991-4032-964a-01c61fefc56a" containerName="installer" Mar 20 06:56:00 crc kubenswrapper[5136]: E0320 06:56:00.154329 5136 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154336 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154467 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154480 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="84671130-5991-4032-964a-01c61fefc56a" containerName="installer" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.154916 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.156621 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.156979 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.161220 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.162659 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566496-kkbk6"] Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.294341 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz79z\" (UniqueName: \"kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z\") pod \"auto-csr-approver-29566496-kkbk6\" (UID: \"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb\") " 
pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.399717 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz79z\" (UniqueName: \"kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z\") pod \"auto-csr-approver-29566496-kkbk6\" (UID: \"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb\") " pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.424665 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz79z\" (UniqueName: \"kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z\") pod \"auto-csr-approver-29566496-kkbk6\" (UID: \"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb\") " pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.474378 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:00 crc kubenswrapper[5136]: I0320 06:56:00.868416 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566496-kkbk6"] Mar 20 06:56:00 crc kubenswrapper[5136]: W0320 06:56:00.884082 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7123c3cf_7f09_4f1f_a99f_b5a3a27c54eb.slice/crio-0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd WatchSource:0}: Error finding container 0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd: Status 404 returned error can't find the container with id 0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd Mar 20 06:56:01 crc kubenswrapper[5136]: I0320 06:56:01.862254 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" 
event={"ID":"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb","Type":"ContainerStarted","Data":"0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd"} Mar 20 06:56:02 crc kubenswrapper[5136]: I0320 06:56:02.869619 5136 generic.go:334] "Generic (PLEG): container finished" podID="7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" containerID="65e785eb1dbd67ac0fede3f7a5dc27c137ef39fb9832a3755e23a954eb908065" exitCode=0 Mar 20 06:56:02 crc kubenswrapper[5136]: I0320 06:56:02.869667 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" event={"ID":"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb","Type":"ContainerDied","Data":"65e785eb1dbd67ac0fede3f7a5dc27c137ef39fb9832a3755e23a954eb908065"} Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.187581 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.342188 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz79z\" (UniqueName: \"kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z\") pod \"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb\" (UID: \"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb\") " Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.348007 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z" (OuterVolumeSpecName: "kube-api-access-sz79z") pod "7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" (UID: "7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb"). InnerVolumeSpecName "kube-api-access-sz79z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.443621 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz79z\" (UniqueName: \"kubernetes.io/projected/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb-kube-api-access-sz79z\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.888443 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" event={"ID":"7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb","Type":"ContainerDied","Data":"0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd"} Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.889009 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ac0187a5ca2535e86930385e33b2252f6bb635a7b36aa2186b83faab3d3adbd" Mar 20 06:56:04 crc kubenswrapper[5136]: I0320 06:56:04.889106 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566496-kkbk6" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.208432 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.209387 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gnspw" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="registry-server" containerID="cri-o://5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780" gracePeriod=30 Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.215040 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.215356 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hjck6" 
podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="registry-server" containerID="cri-o://061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1" gracePeriod=30 Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.227672 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.227917 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" containerID="cri-o://a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7" gracePeriod=30 Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.237374 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.237621 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zvjw4" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="registry-server" containerID="cri-o://52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5" gracePeriod=30 Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.254130 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl2lb"] Mar 20 06:56:28 crc kubenswrapper[5136]: E0320 06:56:28.254448 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" containerName="oc" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.254464 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" containerName="oc" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.254571 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" containerName="oc" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.255096 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.263081 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.263651 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w76x4" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="registry-server" containerID="cri-o://96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8" gracePeriod=30 Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.276310 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl2lb"] Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.361324 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.361512 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd7lq\" (UniqueName: \"kubernetes.io/projected/37de93ad-331e-41ee-8f74-523100e01b09-kube-api-access-kd7lq\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.361629 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.463252 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd7lq\" (UniqueName: \"kubernetes.io/projected/37de93ad-331e-41ee-8f74-523100e01b09-kube-api-access-kd7lq\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.463337 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.463400 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.465595 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: 
\"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.469290 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/37de93ad-331e-41ee-8f74-523100e01b09-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.480354 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd7lq\" (UniqueName: \"kubernetes.io/projected/37de93ad-331e-41ee-8f74-523100e01b09-kube-api-access-kd7lq\") pod \"marketplace-operator-79b997595-sl2lb\" (UID: \"37de93ad-331e-41ee-8f74-523100e01b09\") " pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.638458 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.661861 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.766538 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities\") pod \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.766665 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqscr\" (UniqueName: \"kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr\") pod \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.766723 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content\") pod \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\" (UID: \"899bb83b-4a95-49e5-8e8f-50c309b5d5e1\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.767403 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities" (OuterVolumeSpecName: "utilities") pod "899bb83b-4a95-49e5-8e8f-50c309b5d5e1" (UID: "899bb83b-4a95-49e5-8e8f-50c309b5d5e1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.775947 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr" (OuterVolumeSpecName: "kube-api-access-mqscr") pod "899bb83b-4a95-49e5-8e8f-50c309b5d5e1" (UID: "899bb83b-4a95-49e5-8e8f-50c309b5d5e1"). InnerVolumeSpecName "kube-api-access-mqscr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.790390 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.825883 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.837889 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.845122 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "899bb83b-4a95-49e5-8e8f-50c309b5d5e1" (UID: "899bb83b-4a95-49e5-8e8f-50c309b5d5e1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874339 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content\") pod \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874407 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4p96\" (UniqueName: \"kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96\") pod \"289bd2af-981a-4da9-af4b-77ef6fd7e526\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874433 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics\") pod \"289bd2af-981a-4da9-af4b-77ef6fd7e526\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874531 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content\") pod \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874572 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities\") pod \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874648 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca\") pod \"289bd2af-981a-4da9-af4b-77ef6fd7e526\" (UID: \"289bd2af-981a-4da9-af4b-77ef6fd7e526\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874688 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs57w\" (UniqueName: \"kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w\") pod \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874739 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities\") pod \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\" (UID: \"8a3a1d9c-1870-4a43-95fb-6d07e5619acb\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.874778 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9jhq\" (UniqueName: \"kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq\") pod \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\" (UID: \"ff9e0ea6-add4-4087-83a6-f8d85588d6f2\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.875314 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.875343 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.875359 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqscr\" (UniqueName: 
\"kubernetes.io/projected/899bb83b-4a95-49e5-8e8f-50c309b5d5e1-kube-api-access-mqscr\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.876349 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities" (OuterVolumeSpecName: "utilities") pod "ff9e0ea6-add4-4087-83a6-f8d85588d6f2" (UID: "ff9e0ea6-add4-4087-83a6-f8d85588d6f2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.885719 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "289bd2af-981a-4da9-af4b-77ef6fd7e526" (UID: "289bd2af-981a-4da9-af4b-77ef6fd7e526"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.886042 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities" (OuterVolumeSpecName: "utilities") pod "8a3a1d9c-1870-4a43-95fb-6d07e5619acb" (UID: "8a3a1d9c-1870-4a43-95fb-6d07e5619acb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.892223 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w" (OuterVolumeSpecName: "kube-api-access-bs57w") pod "8a3a1d9c-1870-4a43-95fb-6d07e5619acb" (UID: "8a3a1d9c-1870-4a43-95fb-6d07e5619acb"). InnerVolumeSpecName "kube-api-access-bs57w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.900982 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq" (OuterVolumeSpecName: "kube-api-access-t9jhq") pod "ff9e0ea6-add4-4087-83a6-f8d85588d6f2" (UID: "ff9e0ea6-add4-4087-83a6-f8d85588d6f2"). InnerVolumeSpecName "kube-api-access-t9jhq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.901356 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96" (OuterVolumeSpecName: "kube-api-access-s4p96") pod "289bd2af-981a-4da9-af4b-77ef6fd7e526" (UID: "289bd2af-981a-4da9-af4b-77ef6fd7e526"). InnerVolumeSpecName "kube-api-access-s4p96". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.923018 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "289bd2af-981a-4da9-af4b-77ef6fd7e526" (UID: "289bd2af-981a-4da9-af4b-77ef6fd7e526"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.924049 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.934803 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a3a1d9c-1870-4a43-95fb-6d07e5619acb" (UID: "8a3a1d9c-1870-4a43-95fb-6d07e5619acb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976185 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content\") pod \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976231 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities\") pod \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976280 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnvxm\" (UniqueName: \"kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm\") pod \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\" (UID: \"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5\") " Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976456 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976472 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9jhq\" 
(UniqueName: \"kubernetes.io/projected/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-kube-api-access-t9jhq\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976483 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976491 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4p96\" (UniqueName: \"kubernetes.io/projected/289bd2af-981a-4da9-af4b-77ef6fd7e526-kube-api-access-s4p96\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976500 5136 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976507 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976515 5136 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/289bd2af-981a-4da9-af4b-77ef6fd7e526-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.976524 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs57w\" (UniqueName: \"kubernetes.io/projected/8a3a1d9c-1870-4a43-95fb-6d07e5619acb-kube-api-access-bs57w\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.978970 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-sl2lb"] Mar 20 
06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.979710 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm" (OuterVolumeSpecName: "kube-api-access-jnvxm") pod "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" (UID: "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5"). InnerVolumeSpecName "kube-api-access-jnvxm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:56:28 crc kubenswrapper[5136]: I0320 06:56:28.979713 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities" (OuterVolumeSpecName: "utilities") pod "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" (UID: "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.004144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" (UID: "301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.015729 5136 generic.go:334] "Generic (PLEG): container finished" podID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerID="52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5" exitCode=0 Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.015783 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerDied","Data":"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.015808 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zvjw4" event={"ID":"301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5","Type":"ContainerDied","Data":"20ff1b017eb3949453a5c8e4e818cc934899998a077617b8ba9ac7f5655008d9"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.015846 5136 scope.go:117] "RemoveContainer" containerID="52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.015947 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zvjw4" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.023027 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" event={"ID":"37de93ad-331e-41ee-8f74-523100e01b09","Type":"ContainerStarted","Data":"368a61c7f48a01e7f5d4b69cf2321dc8dd8fda7ffb3f54cbdec66ed733ae02af"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.025015 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerID="96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8" exitCode=0 Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.025064 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerDied","Data":"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.025083 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w76x4" event={"ID":"ff9e0ea6-add4-4087-83a6-f8d85588d6f2","Type":"ContainerDied","Data":"b157f4d8fc06233ee1c508206beda933efb32ecebe1735837a9ebedb70d95894"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.025096 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w76x4" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.026604 5136 generic.go:334] "Generic (PLEG): container finished" podID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerID="a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7" exitCode=0 Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.026650 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerDied","Data":"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.026666 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" event={"ID":"289bd2af-981a-4da9-af4b-77ef6fd7e526","Type":"ContainerDied","Data":"8ab9396d1b0bd00b43015624038265fccd12c5928575d3620513f24c6d495ec3"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.026700 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-mbfm4" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.028949 5136 generic.go:334] "Generic (PLEG): container finished" podID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerID="5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780" exitCode=0 Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.029016 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gnspw" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.029032 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerDied","Data":"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.030789 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gnspw" event={"ID":"8a3a1d9c-1870-4a43-95fb-6d07e5619acb","Type":"ContainerDied","Data":"3e121a671baa07140a3d1cad1e8e105a436e8a55fb9911545361353494c2ebed"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.033478 5136 generic.go:334] "Generic (PLEG): container finished" podID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerID="061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1" exitCode=0 Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.033506 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerDied","Data":"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.033524 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hjck6" event={"ID":"899bb83b-4a95-49e5-8e8f-50c309b5d5e1","Type":"ContainerDied","Data":"a72e48c3682399c05912ec6fcf4bd3347709282c92c6d1cf4cee81749234bee6"} Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.033590 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hjck6" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.036544 5136 scope.go:117] "RemoveContainer" containerID="dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.072315 5136 scope.go:117] "RemoveContainer" containerID="e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.073036 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff9e0ea6-add4-4087-83a6-f8d85588d6f2" (UID: "ff9e0ea6-add4-4087-83a6-f8d85588d6f2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.076626 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.080004 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.080169 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.080179 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff9e0ea6-add4-4087-83a6-f8d85588d6f2-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.080188 5136 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-jnvxm\" (UniqueName: \"kubernetes.io/projected/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5-kube-api-access-jnvxm\") on node \"crc\" DevicePath \"\"" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.084249 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-mbfm4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.089890 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.092954 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zvjw4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.096228 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.101301 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hjck6"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.104840 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.106662 5136 scope.go:117] "RemoveContainer" containerID="52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.107070 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5\": container with ID starting with 52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5 not found: ID does not exist" containerID="52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.107112 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5"} err="failed to get container status \"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5\": rpc error: code = NotFound desc = could not find container \"52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5\": container with ID starting with 52f516ab12d99512d73f419dcdb0b2dce29d3e61b9dafa84863f5c5a03f74ee5 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.107139 5136 scope.go:117] "RemoveContainer" containerID="dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.107827 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968\": container with ID starting with dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968 not found: ID does not exist" containerID="dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.107855 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968"} err="failed to get container status \"dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968\": rpc error: code = NotFound desc = could not find container \"dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968\": container with ID starting with dcf8682364592d40230750711520ec56e59f7e993372f1598195878cd5fec968 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.107888 5136 scope.go:117] "RemoveContainer" containerID="e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.108086 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-gnspw"] Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.108120 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d\": container with ID starting with e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d not found: ID does not exist" containerID="e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.108141 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d"} err="failed to get container status \"e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d\": rpc error: code = NotFound desc = could not find container \"e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d\": container with ID starting with e61e26151ba5d22e43e23f8384fad74f020a5d4b865b181bec56056cb9b3e52d not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.108156 5136 scope.go:117] "RemoveContainer" containerID="96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.119950 5136 scope.go:117] "RemoveContainer" containerID="aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.140856 5136 scope.go:117] "RemoveContainer" containerID="0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.152036 5136 scope.go:117] "RemoveContainer" containerID="96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.152476 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8\": container with ID starting with 96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8 not found: ID does not exist" containerID="96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.152514 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8"} err="failed to get container status \"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8\": rpc error: code = NotFound desc = could not find container \"96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8\": container with ID starting with 96a902822e546e09f7a3de1657b0dcec6395a8a029066a0684766a1f0066cca8 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.152540 5136 scope.go:117] "RemoveContainer" containerID="aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.152891 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784\": container with ID starting with aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784 not found: ID does not exist" containerID="aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.152912 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784"} err="failed to get container status \"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784\": rpc error: code = NotFound desc = could not find container 
\"aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784\": container with ID starting with aab3768f3e229745fc4445cf28136207dc998f4d0eb15cc907b6c70eec03b784 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.152927 5136 scope.go:117] "RemoveContainer" containerID="0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.153174 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675\": container with ID starting with 0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675 not found: ID does not exist" containerID="0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.153193 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675"} err="failed to get container status \"0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675\": rpc error: code = NotFound desc = could not find container \"0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675\": container with ID starting with 0de5278ba167eea205a249eb3cbbe0e4bb17b3e9cc3e26b00698b091881ee675 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.153205 5136 scope.go:117] "RemoveContainer" containerID="a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.169719 5136 scope.go:117] "RemoveContainer" containerID="83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.181538 5136 scope.go:117] "RemoveContainer" containerID="a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7" Mar 20 06:56:29 crc 
kubenswrapper[5136]: E0320 06:56:29.181835 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7\": container with ID starting with a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7 not found: ID does not exist" containerID="a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.181860 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7"} err="failed to get container status \"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7\": rpc error: code = NotFound desc = could not find container \"a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7\": container with ID starting with a36097ca86c93e84e1fc63afa5daf31c84532f6db102fdbe3ea06918e836f5d7 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.181880 5136 scope.go:117] "RemoveContainer" containerID="83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.182298 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a\": container with ID starting with 83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a not found: ID does not exist" containerID="83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.182316 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a"} err="failed to get container status 
\"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a\": rpc error: code = NotFound desc = could not find container \"83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a\": container with ID starting with 83709a4627913d4131e8b55a35967c82cd8a60e366ebf5644c739dab2726f14a not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.182329 5136 scope.go:117] "RemoveContainer" containerID="5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.200077 5136 scope.go:117] "RemoveContainer" containerID="53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.214172 5136 scope.go:117] "RemoveContainer" containerID="ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.229404 5136 scope.go:117] "RemoveContainer" containerID="5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.229717 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780\": container with ID starting with 5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780 not found: ID does not exist" containerID="5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.229745 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780"} err="failed to get container status \"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780\": rpc error: code = NotFound desc = could not find container \"5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780\": container with ID starting 
with 5664be994ef1d148ab5592ebabed18dba44cfe3e1131eec1b13e05caa0a92780 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.229766 5136 scope.go:117] "RemoveContainer" containerID="53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.230051 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71\": container with ID starting with 53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71 not found: ID does not exist" containerID="53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.230074 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71"} err="failed to get container status \"53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71\": rpc error: code = NotFound desc = could not find container \"53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71\": container with ID starting with 53bde684225bb133cb5df58d7ee1ac04c8c2617d28ec0ad948d7f53954e6eb71 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.230087 5136 scope.go:117] "RemoveContainer" containerID="ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.230607 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958\": container with ID starting with ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958 not found: ID does not exist" containerID="ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958" Mar 20 06:56:29 
crc kubenswrapper[5136]: I0320 06:56:29.230630 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958"} err="failed to get container status \"ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958\": rpc error: code = NotFound desc = could not find container \"ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958\": container with ID starting with ecdb1a90af5f961f167e677237df30a562882d3df07533b14a56db3666fc4958 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.230644 5136 scope.go:117] "RemoveContainer" containerID="061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.243766 5136 scope.go:117] "RemoveContainer" containerID="61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.258216 5136 scope.go:117] "RemoveContainer" containerID="eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.269766 5136 scope.go:117] "RemoveContainer" containerID="061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.270171 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1\": container with ID starting with 061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1 not found: ID does not exist" containerID="061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.270228 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1"} 
err="failed to get container status \"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1\": rpc error: code = NotFound desc = could not find container \"061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1\": container with ID starting with 061ea93c85bc00917c3ac8ea8d30b338d02a490b0ecea2ffda90681157877bc1 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.270256 5136 scope.go:117] "RemoveContainer" containerID="61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.270706 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c\": container with ID starting with 61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c not found: ID does not exist" containerID="61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.270756 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c"} err="failed to get container status \"61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c\": rpc error: code = NotFound desc = could not find container \"61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c\": container with ID starting with 61c7e442e15bdf8dabbf86eee0b74ef3d0bd7331f7f4cb4df8d344122f5d037c not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.270786 5136 scope.go:117] "RemoveContainer" containerID="eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.271080 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5\": container with ID starting with eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5 not found: ID does not exist" containerID="eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.271115 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5"} err="failed to get container status \"eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5\": rpc error: code = NotFound desc = could not find container \"eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5\": container with ID starting with eab31ab4f8f6aed5a537d839989db0fed9674585ea48c1c98222ce1535b9e5f5 not found: ID does not exist" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.391603 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.395705 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w76x4"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822063 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2mt29"] Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822302 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822320 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822336 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="registry-server" Mar 20 06:56:29 crc 
kubenswrapper[5136]: I0320 06:56:29.822345 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822359 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822367 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822378 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822386 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822398 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822405 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822415 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822423 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822431 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="registry-server" Mar 20 06:56:29 crc 
kubenswrapper[5136]: I0320 06:56:29.822438 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822449 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822456 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822467 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822474 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822484 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822494 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="extract-content" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822505 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822514 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822523 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc 
kubenswrapper[5136]: I0320 06:56:29.822531 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822540 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822547 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="extract-utilities" Mar 20 06:56:29 crc kubenswrapper[5136]: E0320 06:56:29.822558 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822565 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822660 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822677 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822687 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822696 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822709 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" 
containerName="registry-server" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.822724 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" containerName="marketplace-operator" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.823390 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.826487 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.839310 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2mt29"] Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.888374 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mxtm\" (UniqueName: \"kubernetes.io/projected/75cf71d1-5e27-4089-bf58-1f389690d498-kube-api-access-5mxtm\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.888412 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-catalog-content\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.888457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-utilities\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " 
pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.989927 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mxtm\" (UniqueName: \"kubernetes.io/projected/75cf71d1-5e27-4089-bf58-1f389690d498-kube-api-access-5mxtm\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.989964 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-catalog-content\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.990011 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-utilities\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.990444 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-utilities\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:29 crc kubenswrapper[5136]: I0320 06:56:29.990529 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/75cf71d1-5e27-4089-bf58-1f389690d498-catalog-content\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" 
Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.008536 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mxtm\" (UniqueName: \"kubernetes.io/projected/75cf71d1-5e27-4089-bf58-1f389690d498-kube-api-access-5mxtm\") pod \"redhat-marketplace-2mt29\" (UID: \"75cf71d1-5e27-4089-bf58-1f389690d498\") " pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.044313 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" event={"ID":"37de93ad-331e-41ee-8f74-523100e01b09","Type":"ContainerStarted","Data":"497d3e5714638801946f1bf0cc90a9285aff4aae2f8254e31294466b1d477b9d"} Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.044514 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.048484 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.064541 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-sl2lb" podStartSLOduration=2.064517502 podStartE2EDuration="2.064517502s" podCreationTimestamp="2026-03-20 06:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:56:30.060280618 +0000 UTC m=+422.319591829" watchObservedRunningTime="2026-03-20 06:56:30.064517502 +0000 UTC m=+422.323828653" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.141558 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2mt29" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.402401 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="289bd2af-981a-4da9-af4b-77ef6fd7e526" path="/var/lib/kubelet/pods/289bd2af-981a-4da9-af4b-77ef6fd7e526/volumes" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.402891 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5" path="/var/lib/kubelet/pods/301e1f09-ed9b-4d2f-ae95-c098e8ae4dd5/volumes" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.403419 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899bb83b-4a95-49e5-8e8f-50c309b5d5e1" path="/var/lib/kubelet/pods/899bb83b-4a95-49e5-8e8f-50c309b5d5e1/volumes" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.405089 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a3a1d9c-1870-4a43-95fb-6d07e5619acb" path="/var/lib/kubelet/pods/8a3a1d9c-1870-4a43-95fb-6d07e5619acb/volumes" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.405686 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff9e0ea6-add4-4087-83a6-f8d85588d6f2" path="/var/lib/kubelet/pods/ff9e0ea6-add4-4087-83a6-f8d85588d6f2/volumes" Mar 20 06:56:30 crc kubenswrapper[5136]: I0320 06:56:30.574256 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2mt29"] Mar 20 06:56:30 crc kubenswrapper[5136]: W0320 06:56:30.588158 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75cf71d1_5e27_4089_bf58_1f389690d498.slice/crio-9276c6d954614cf23e9c04e52e57b2a253765182465a22b173d3ca0eb4de1b14 WatchSource:0}: Error finding container 9276c6d954614cf23e9c04e52e57b2a253765182465a22b173d3ca0eb4de1b14: Status 404 returned error can't find the container with id 
9276c6d954614cf23e9c04e52e57b2a253765182465a22b173d3ca0eb4de1b14
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.052734 5136 generic.go:334] "Generic (PLEG): container finished" podID="75cf71d1-5e27-4089-bf58-1f389690d498" containerID="a89de7a7de765351b0517f7ba5e755d0c47d37d4861e0172d7a8a1a982e27464" exitCode=0
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.052852 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mt29" event={"ID":"75cf71d1-5e27-4089-bf58-1f389690d498","Type":"ContainerDied","Data":"a89de7a7de765351b0517f7ba5e755d0c47d37d4861e0172d7a8a1a982e27464"}
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.052892 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mt29" event={"ID":"75cf71d1-5e27-4089-bf58-1f389690d498","Type":"ContainerStarted","Data":"9276c6d954614cf23e9c04e52e57b2a253765182465a22b173d3ca0eb4de1b14"}
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.231720 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zsmxp"]
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.233222 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.236051 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.245582 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zsmxp"]
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.306174 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-utilities\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.306226 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-catalog-content\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.306423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rkrd\" (UniqueName: \"kubernetes.io/projected/2d0ba076-45a3-4e99-80de-774db592dfc5-kube-api-access-2rkrd\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.407981 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-utilities\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.408045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-catalog-content\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.408092 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rkrd\" (UniqueName: \"kubernetes.io/projected/2d0ba076-45a3-4e99-80de-774db592dfc5-kube-api-access-2rkrd\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.408617 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-utilities\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.408870 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2d0ba076-45a3-4e99-80de-774db592dfc5-catalog-content\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.427423 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rkrd\" (UniqueName: \"kubernetes.io/projected/2d0ba076-45a3-4e99-80de-774db592dfc5-kube-api-access-2rkrd\") pod \"redhat-operators-zsmxp\" (UID: \"2d0ba076-45a3-4e99-80de-774db592dfc5\") " pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:31 crc kubenswrapper[5136]: I0320 06:56:31.547019 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.003996 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zsmxp"]
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.060698 5136 generic.go:334] "Generic (PLEG): container finished" podID="75cf71d1-5e27-4089-bf58-1f389690d498" containerID="0dddf0b2897b3d23c02e5ddfb86042209415e1483cbb8aba0921474934ac5aa3" exitCode=0
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.060765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mt29" event={"ID":"75cf71d1-5e27-4089-bf58-1f389690d498","Type":"ContainerDied","Data":"0dddf0b2897b3d23c02e5ddfb86042209415e1483cbb8aba0921474934ac5aa3"}
Mar 20 06:56:32 crc kubenswrapper[5136]: W0320 06:56:32.068315 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d0ba076_45a3_4e99_80de_774db592dfc5.slice/crio-6d03f66bcb4fa51f7956526d0ae9669da1cf34482f214d7252c1106f9ad093c6 WatchSource:0}: Error finding container 6d03f66bcb4fa51f7956526d0ae9669da1cf34482f214d7252c1106f9ad093c6: Status 404 returned error can't find the container with id 6d03f66bcb4fa51f7956526d0ae9669da1cf34482f214d7252c1106f9ad093c6
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.233287 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-598hk"]
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.234303 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.236180 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.243369 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-598hk"]
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.318476 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-catalog-content\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.318741 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-utilities\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.318864 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92nb6\" (UniqueName: \"kubernetes.io/projected/6b1bb4bc-89fb-4965-892b-8db898976bc0-kube-api-access-92nb6\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.419808 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-utilities\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.419882 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92nb6\" (UniqueName: \"kubernetes.io/projected/6b1bb4bc-89fb-4965-892b-8db898976bc0-kube-api-access-92nb6\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.419959 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-catalog-content\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.420444 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-catalog-content\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.420452 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b1bb4bc-89fb-4965-892b-8db898976bc0-utilities\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.438000 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92nb6\" (UniqueName: \"kubernetes.io/projected/6b1bb4bc-89fb-4965-892b-8db898976bc0-kube-api-access-92nb6\") pod \"certified-operators-598hk\" (UID: \"6b1bb4bc-89fb-4965-892b-8db898976bc0\") " pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.592174 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:32 crc kubenswrapper[5136]: I0320 06:56:32.847747 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-598hk"]
Mar 20 06:56:32 crc kubenswrapper[5136]: W0320 06:56:32.855771 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b1bb4bc_89fb_4965_892b_8db898976bc0.slice/crio-c19648ccc57110e321d7cbc4973c13b1a90f7580234bc3c58cf75da56b1995c5 WatchSource:0}: Error finding container c19648ccc57110e321d7cbc4973c13b1a90f7580234bc3c58cf75da56b1995c5: Status 404 returned error can't find the container with id c19648ccc57110e321d7cbc4973c13b1a90f7580234bc3c58cf75da56b1995c5
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.066096 5136 generic.go:334] "Generic (PLEG): container finished" podID="2d0ba076-45a3-4e99-80de-774db592dfc5" containerID="a5d57dbb13a1f3757cd37f7bc62263340d3ac4a3fbfb8f9cd07f6d492e39d36c" exitCode=0
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.066150 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsmxp" event={"ID":"2d0ba076-45a3-4e99-80de-774db592dfc5","Type":"ContainerDied","Data":"a5d57dbb13a1f3757cd37f7bc62263340d3ac4a3fbfb8f9cd07f6d492e39d36c"}
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.066490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsmxp" event={"ID":"2d0ba076-45a3-4e99-80de-774db592dfc5","Type":"ContainerStarted","Data":"6d03f66bcb4fa51f7956526d0ae9669da1cf34482f214d7252c1106f9ad093c6"}
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.068133 5136 generic.go:334] "Generic (PLEG): container finished" podID="6b1bb4bc-89fb-4965-892b-8db898976bc0" containerID="3046189ba49b0be310b4ce25e92c6fe1a1c7b873323c95bc0bd11b6f05f13f89" exitCode=0
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.068189 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-598hk" event={"ID":"6b1bb4bc-89fb-4965-892b-8db898976bc0","Type":"ContainerDied","Data":"3046189ba49b0be310b4ce25e92c6fe1a1c7b873323c95bc0bd11b6f05f13f89"}
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.068207 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-598hk" event={"ID":"6b1bb4bc-89fb-4965-892b-8db898976bc0","Type":"ContainerStarted","Data":"c19648ccc57110e321d7cbc4973c13b1a90f7580234bc3c58cf75da56b1995c5"}
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.070202 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2mt29" event={"ID":"75cf71d1-5e27-4089-bf58-1f389690d498","Type":"ContainerStarted","Data":"d0f97766edc28a9379c3e8e1b7e5e01489814145224110af63c2cfe44a717a7a"}
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.129688 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2mt29" podStartSLOduration=2.724341967 podStartE2EDuration="4.129669493s" podCreationTimestamp="2026-03-20 06:56:29 +0000 UTC" firstStartedPulling="2026-03-20 06:56:31.054456303 +0000 UTC m=+423.313767464" lastFinishedPulling="2026-03-20 06:56:32.459783819 +0000 UTC m=+424.719094990" observedRunningTime="2026-03-20 06:56:33.126230515 +0000 UTC m=+425.385541696" watchObservedRunningTime="2026-03-20 06:56:33.129669493 +0000 UTC m=+425.388980664"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.625182 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qfgkr"]
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.626694 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.630978 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.638293 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qfgkr"]
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.743296 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.743333 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6x6v\" (UniqueName: \"kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.743391 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.844221 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.844258 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6x6v\" (UniqueName: \"kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.844290 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.844869 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.845095 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.862425 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6x6v\" (UniqueName: \"kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v\") pod \"community-operators-qfgkr\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") " pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:33 crc kubenswrapper[5136]: I0320 06:56:33.972901 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:34 crc kubenswrapper[5136]: I0320 06:56:34.129702 5136 generic.go:334] "Generic (PLEG): container finished" podID="6b1bb4bc-89fb-4965-892b-8db898976bc0" containerID="4dadae52a17622bdc271e4cd40b7b8d7159ca5417b3f02335d28ed4031d547ac" exitCode=0
Mar 20 06:56:34 crc kubenswrapper[5136]: I0320 06:56:34.129769 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-598hk" event={"ID":"6b1bb4bc-89fb-4965-892b-8db898976bc0","Type":"ContainerDied","Data":"4dadae52a17622bdc271e4cd40b7b8d7159ca5417b3f02335d28ed4031d547ac"}
Mar 20 06:56:34 crc kubenswrapper[5136]: I0320 06:56:34.139087 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsmxp" event={"ID":"2d0ba076-45a3-4e99-80de-774db592dfc5","Type":"ContainerStarted","Data":"2b7f572170075bb33521c97b19da5ee4153208d19babee031c6a18cb7c553929"}
Mar 20 06:56:34 crc kubenswrapper[5136]: I0320 06:56:34.443195 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qfgkr"]
Mar 20 06:56:34 crc kubenswrapper[5136]: W0320 06:56:34.513790 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1d2d341_1694_4f55_860a_46b11bac80c8.slice/crio-a5974c459c2386be53440ce7c343a53cc05081c5b95afdcf1296d6de5d8a6e98 WatchSource:0}: Error finding container a5974c459c2386be53440ce7c343a53cc05081c5b95afdcf1296d6de5d8a6e98: Status 404 returned error can't find the container with id a5974c459c2386be53440ce7c343a53cc05081c5b95afdcf1296d6de5d8a6e98
Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.146569 5136 generic.go:334] "Generic (PLEG): container finished" podID="2d0ba076-45a3-4e99-80de-774db592dfc5" containerID="2b7f572170075bb33521c97b19da5ee4153208d19babee031c6a18cb7c553929" exitCode=0
Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.146657 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsmxp" event={"ID":"2d0ba076-45a3-4e99-80de-774db592dfc5","Type":"ContainerDied","Data":"2b7f572170075bb33521c97b19da5ee4153208d19babee031c6a18cb7c553929"}
Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.149206 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-598hk" event={"ID":"6b1bb4bc-89fb-4965-892b-8db898976bc0","Type":"ContainerStarted","Data":"8d80bfba3b0e6cbf81f375aa1672b68e1ada30d3cafa144104d761bf21887186"}
Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.150981 5136 generic.go:334] "Generic (PLEG): container finished" podID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerID="6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c" exitCode=0
Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.151023 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerDied","Data":"6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c"}
Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.151052 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerStarted","Data":"a5974c459c2386be53440ce7c343a53cc05081c5b95afdcf1296d6de5d8a6e98"}
Mar 20 06:56:35 crc kubenswrapper[5136]: I0320 06:56:35.181749 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-598hk" podStartSLOduration=1.7075786549999998 podStartE2EDuration="3.18173008s" podCreationTimestamp="2026-03-20 06:56:32 +0000 UTC" firstStartedPulling="2026-03-20 06:56:33.069696434 +0000 UTC m=+425.329007585" lastFinishedPulling="2026-03-20 06:56:34.543847859 +0000 UTC m=+426.803159010" observedRunningTime="2026-03-20 06:56:35.178503498 +0000 UTC m=+427.437814649" watchObservedRunningTime="2026-03-20 06:56:35.18173008 +0000 UTC m=+427.441041221"
Mar 20 06:56:36 crc kubenswrapper[5136]: I0320 06:56:36.168388 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zsmxp" event={"ID":"2d0ba076-45a3-4e99-80de-774db592dfc5","Type":"ContainerStarted","Data":"c9843d8b2bef9509e80941a197c366215ec3619457851006774f5d5e3c7a883a"}
Mar 20 06:56:36 crc kubenswrapper[5136]: I0320 06:56:36.171322 5136 generic.go:334] "Generic (PLEG): container finished" podID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerID="6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697" exitCode=0
Mar 20 06:56:36 crc kubenswrapper[5136]: I0320 06:56:36.171379 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerDied","Data":"6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697"}
Mar 20 06:56:36 crc kubenswrapper[5136]: I0320 06:56:36.194525 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zsmxp" podStartSLOduration=2.707804163 podStartE2EDuration="5.194509015s" podCreationTimestamp="2026-03-20 06:56:31 +0000 UTC" firstStartedPulling="2026-03-20 06:56:33.067233136 +0000 UTC m=+425.326544287" lastFinishedPulling="2026-03-20 06:56:35.553937968 +0000 UTC m=+427.813249139" observedRunningTime="2026-03-20 06:56:36.192273373 +0000 UTC m=+428.451584524" watchObservedRunningTime="2026-03-20 06:56:36.194509015 +0000 UTC m=+428.453820166"
Mar 20 06:56:37 crc kubenswrapper[5136]: I0320 06:56:37.178403 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerStarted","Data":"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871"}
Mar 20 06:56:37 crc kubenswrapper[5136]: I0320 06:56:37.199408 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qfgkr" podStartSLOduration=2.779295786 podStartE2EDuration="4.199391339s" podCreationTimestamp="2026-03-20 06:56:33 +0000 UTC" firstStartedPulling="2026-03-20 06:56:35.152806885 +0000 UTC m=+427.412118056" lastFinishedPulling="2026-03-20 06:56:36.572902458 +0000 UTC m=+428.832213609" observedRunningTime="2026-03-20 06:56:37.195887698 +0000 UTC m=+429.455198849" watchObservedRunningTime="2026-03-20 06:56:37.199391339 +0000 UTC m=+429.458702510"
Mar 20 06:56:40 crc kubenswrapper[5136]: I0320 06:56:40.142224 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2mt29"
Mar 20 06:56:40 crc kubenswrapper[5136]: I0320 06:56:40.142564 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2mt29"
Mar 20 06:56:40 crc kubenswrapper[5136]: I0320 06:56:40.185046 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2mt29"
Mar 20 06:56:40 crc kubenswrapper[5136]: I0320 06:56:40.231013 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2mt29"
Mar 20 06:56:41 crc kubenswrapper[5136]: I0320 06:56:41.548010 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:41 crc kubenswrapper[5136]: I0320 06:56:41.548055 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zsmxp"
Mar 20 06:56:42 crc kubenswrapper[5136]: I0320 06:56:42.592407 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:42 crc kubenswrapper[5136]: I0320 06:56:42.592725 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:42 crc kubenswrapper[5136]: I0320 06:56:42.603281 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zsmxp" podUID="2d0ba076-45a3-4e99-80de-774db592dfc5" containerName="registry-server" probeResult="failure" output=<
Mar 20 06:56:42 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s
Mar 20 06:56:42 crc kubenswrapper[5136]: >
Mar 20 06:56:42 crc kubenswrapper[5136]: I0320 06:56:42.627861 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:43 crc kubenswrapper[5136]: I0320 06:56:43.247804 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-598hk"
Mar 20 06:56:43 crc kubenswrapper[5136]: I0320 06:56:43.974046 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:43 crc kubenswrapper[5136]: I0320 06:56:43.974125 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:44 crc kubenswrapper[5136]: I0320 06:56:44.009198 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:44 crc kubenswrapper[5136]: I0320 06:56:44.257686 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.307115 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8s5gx"]
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.307947 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.325712 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8s5gx"]
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505684 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-trusted-ca\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505724 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505745 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-registry-certificates\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505771 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/31623b1c-0c30-4654-99eb-3919c754586a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505794 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqgk5\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-kube-api-access-kqgk5\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.505984 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-bound-sa-token\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.506052 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-registry-tls\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.506076 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/31623b1c-0c30-4654-99eb-3919c754586a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.530144 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.607314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-bound-sa-token\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.607697 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-registry-tls\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.607735 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/31623b1c-0c30-4654-99eb-3919c754586a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.607914 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-trusted-ca\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.607958 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-registry-certificates\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.608000 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/31623b1c-0c30-4654-99eb-3919c754586a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.608041 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqgk5\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-kube-api-access-kqgk5\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.609269 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/31623b1c-0c30-4654-99eb-3919c754586a-ca-trust-extracted\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.609762 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-trusted-ca\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.610176 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/31623b1c-0c30-4654-99eb-3919c754586a-registry-certificates\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.613835 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-registry-tls\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.614033 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/31623b1c-0c30-4654-99eb-3919c754586a-installation-pull-secrets\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.629523 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqgk5\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-kube-api-access-kqgk5\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.629916 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/31623b1c-0c30-4654-99eb-3919c754586a-bound-sa-token\") pod \"image-registry-66df7c8f76-8s5gx\" (UID: \"31623b1c-0c30-4654-99eb-3919c754586a\") " pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:47 crc kubenswrapper[5136]: I0320 06:56:47.925171 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx"
Mar 20 06:56:48 crc kubenswrapper[5136]: I0320 06:56:48.378888 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-8s5gx"]
Mar 20 06:56:48 crc kubenswrapper[5136]: W0320 06:56:48.384926 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31623b1c_0c30_4654_99eb_3919c754586a.slice/crio-eac31a64abff47046c960020b5922875604071a6549de598031c3bba01676145 WatchSource:0}: Error finding container eac31a64abff47046c960020b5922875604071a6549de598031c3bba01676145: Status 404 returned error can't find the container with id eac31a64abff47046c960020b5922875604071a6549de598031c3bba01676145
Mar 20 06:56:49 crc kubenswrapper[5136]: I0320 06:56:49.239415 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" event={"ID":"31623b1c-0c30-4654-99eb-3919c754586a","Type":"ContainerStarted","Data":"77fa2c2d30ffa27d0374c901c22e2e5cd184dfb4706fca48a4c737680738c21e"}
Mar 20 06:56:49 crc kubenswrapper[5136]: I0320 06:56:49.239467 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" event={"ID":"31623b1c-0c30-4654-99eb-3919c754586a","Type":"ContainerStarted","Data":"eac31a64abff47046c960020b5922875604071a6549de598031c3bba01676145"}
Mar 20 06:56:49 crc kubenswrapper[5136]: I0320 06:56:49.239719 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness"
status="" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:56:49 crc kubenswrapper[5136]: I0320 06:56:49.271839 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" podStartSLOduration=2.271780141 podStartE2EDuration="2.271780141s" podCreationTimestamp="2026-03-20 06:56:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 06:56:49.263904352 +0000 UTC m=+441.523215503" watchObservedRunningTime="2026-03-20 06:56:49.271780141 +0000 UTC m=+441.531091302" Mar 20 06:56:51 crc kubenswrapper[5136]: I0320 06:56:51.593992 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:56:51 crc kubenswrapper[5136]: I0320 06:56:51.639784 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zsmxp" Mar 20 06:57:05 crc kubenswrapper[5136]: I0320 06:57:05.574421 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:57:05 crc kubenswrapper[5136]: I0320 06:57:05.575145 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:57:05 crc kubenswrapper[5136]: I0320 06:57:05.577897 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:57:05 crc kubenswrapper[5136]: I0320 06:57:05.585865 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:57:05 crc kubenswrapper[5136]: I0320 06:57:05.796757 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.357711 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"fd9f1a089c8d6f7ac25b5f394b4badef16e4b89051cb170c474d913e5d4f4145"} Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.358301 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8f64215c02d0190dc726f7bd61c5372cc40fcf94cfa7eebbe4909a83bdec1832"} Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.485774 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod 
\"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.485845 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.490928 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.491070 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.497234 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Mar 20 06:57:06 crc kubenswrapper[5136]: I0320 06:57:06.601436 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:57:06 crc kubenswrapper[5136]: W0320 06:57:06.794043 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-44d6f20d22e242524d7127dfd8438e337de95cdb2421ae0a6e82b2b86757d090 WatchSource:0}: Error finding container 44d6f20d22e242524d7127dfd8438e337de95cdb2421ae0a6e82b2b86757d090: Status 404 returned error can't find the container with id 44d6f20d22e242524d7127dfd8438e337de95cdb2421ae0a6e82b2b86757d090 Mar 20 06:57:06 crc kubenswrapper[5136]: W0320 06:57:06.945664 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-fb32a5ab42245e1eb6d3b378c3ed82becd1a675731b0e6b4d0c133d4c4cf9537 WatchSource:0}: Error finding container fb32a5ab42245e1eb6d3b378c3ed82becd1a675731b0e6b4d0c133d4c4cf9537: Status 404 returned error can't find the container with id fb32a5ab42245e1eb6d3b378c3ed82becd1a675731b0e6b4d0c133d4c4cf9537 Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.364838 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"18aee57588522e6b9ee43fb6a19224614bfb0d260631248a7e48434f99a252a9"} Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.365251 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"fb32a5ab42245e1eb6d3b378c3ed82becd1a675731b0e6b4d0c133d4c4cf9537"} Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.366212 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"fd812e5f2b792b8d74b1c711bc124896bb0d6d54b891e3b006264acc68868e41"} Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.366234 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"44d6f20d22e242524d7127dfd8438e337de95cdb2421ae0a6e82b2b86757d090"} Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.366415 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.929855 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-8s5gx" Mar 20 06:57:07 crc kubenswrapper[5136]: I0320 06:57:07.993791 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"] Mar 20 06:57:15 crc kubenswrapper[5136]: I0320 06:57:15.822584 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:57:15 crc kubenswrapper[5136]: I0320 06:57:15.823098 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.041599 5136 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" podUID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" containerName="registry" containerID="cri-o://6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f" gracePeriod=30 Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.487357 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.538571 5136 generic.go:334] "Generic (PLEG): container finished" podID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" containerID="6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f" exitCode=0 Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.538630 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.538634 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" event={"ID":"16ee2b48-5dea-48c6-888a-ae52ff44afa4","Type":"ContainerDied","Data":"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f"} Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.538860 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-fk4pl" event={"ID":"16ee2b48-5dea-48c6-888a-ae52ff44afa4","Type":"ContainerDied","Data":"2150aeb737b2b0e0b3d8cc086d915c665b772b5d543fe941e2ca6a177c62b012"} Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.538913 5136 scope.go:117] "RemoveContainer" containerID="6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.556498 5136 scope.go:117] "RemoveContainer" containerID="6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f" Mar 20 06:57:33 crc kubenswrapper[5136]: E0320 
06:57:33.557079 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f\": container with ID starting with 6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f not found: ID does not exist" containerID="6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.557139 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f"} err="failed to get container status \"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f\": rpc error: code = NotFound desc = could not find container \"6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f\": container with ID starting with 6c63fda8ff6ae62b077f4f27f5dde80e5a28a6eac953ffd0077b516fdb381d9f not found: ID does not exist" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.663664 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.663753 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.663869 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca\") pod 
\"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.663910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.664064 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.664176 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kl98m\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.664215 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.664251 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets\") pod \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\" (UID: \"16ee2b48-5dea-48c6-888a-ae52ff44afa4\") " Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.666086 5136 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.666506 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.672271 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m" (OuterVolumeSpecName: "kube-api-access-kl98m") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "kube-api-access-kl98m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.672490 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.672861 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.676639 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.688596 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.698222 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "16ee2b48-5dea-48c6-888a-ae52ff44afa4" (UID: "16ee2b48-5dea-48c6-888a-ae52ff44afa4"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765696 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kl98m\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-kube-api-access-kl98m\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765734 5136 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/16ee2b48-5dea-48c6-888a-ae52ff44afa4-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765746 5136 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-certificates\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765758 5136 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/16ee2b48-5dea-48c6-888a-ae52ff44afa4-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765769 5136 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-bound-sa-token\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765781 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16ee2b48-5dea-48c6-888a-ae52ff44afa4-trusted-ca\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.765792 5136 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/16ee2b48-5dea-48c6-888a-ae52ff44afa4-registry-tls\") on node \"crc\" DevicePath \"\"" Mar 20 06:57:33 crc 
kubenswrapper[5136]: I0320 06:57:33.873118 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"] Mar 20 06:57:33 crc kubenswrapper[5136]: I0320 06:57:33.877702 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-fk4pl"] Mar 20 06:57:34 crc kubenswrapper[5136]: I0320 06:57:34.408846 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" path="/var/lib/kubelet/pods/16ee2b48-5dea-48c6-888a-ae52ff44afa4/volumes" Mar 20 06:57:36 crc kubenswrapper[5136]: I0320 06:57:36.607104 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Mar 20 06:57:45 crc kubenswrapper[5136]: I0320 06:57:45.861170 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:57:45 crc kubenswrapper[5136]: I0320 06:57:45.862136 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.150568 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566498-pc964"] Mar 20 06:58:00 crc kubenswrapper[5136]: E0320 06:58:00.151884 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" containerName="registry" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.151918 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" containerName="registry" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.152180 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ee2b48-5dea-48c6-888a-ae52ff44afa4" containerName="registry" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.152990 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.159153 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.159528 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.160079 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.163972 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566498-pc964"] Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.347339 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvmv6\" (UniqueName: \"kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6\") pod \"auto-csr-approver-29566498-pc964\" (UID: \"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d\") " pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.449270 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvmv6\" (UniqueName: \"kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6\") pod \"auto-csr-approver-29566498-pc964\" (UID: \"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d\") " 
pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.485342 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvmv6\" (UniqueName: \"kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6\") pod \"auto-csr-approver-29566498-pc964\" (UID: \"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d\") " pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.489986 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:00 crc kubenswrapper[5136]: I0320 06:58:00.761519 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566498-pc964"] Mar 20 06:58:01 crc kubenswrapper[5136]: I0320 06:58:01.749273 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566498-pc964" event={"ID":"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d","Type":"ContainerStarted","Data":"18b14d417d86bb1444abe37f82fd2d88b81fafc542a3b9491fd4c2419eda43db"} Mar 20 06:58:02 crc kubenswrapper[5136]: I0320 06:58:02.756551 5136 generic.go:334] "Generic (PLEG): container finished" podID="96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" containerID="c00038ddb8710afcc46ffbe4488fc8393e21c46c51a56a16f3faef658211be51" exitCode=0 Mar 20 06:58:02 crc kubenswrapper[5136]: I0320 06:58:02.756610 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566498-pc964" event={"ID":"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d","Type":"ContainerDied","Data":"c00038ddb8710afcc46ffbe4488fc8393e21c46c51a56a16f3faef658211be51"} Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.085090 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.100542 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvmv6\" (UniqueName: \"kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6\") pod \"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d\" (UID: \"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d\") " Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.109782 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6" (OuterVolumeSpecName: "kube-api-access-wvmv6") pod "96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" (UID: "96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d"). InnerVolumeSpecName "kube-api-access-wvmv6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.201641 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvmv6\" (UniqueName: \"kubernetes.io/projected/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d-kube-api-access-wvmv6\") on node \"crc\" DevicePath \"\"" Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.771846 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566498-pc964" event={"ID":"96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d","Type":"ContainerDied","Data":"18b14d417d86bb1444abe37f82fd2d88b81fafc542a3b9491fd4c2419eda43db"} Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.771885 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18b14d417d86bb1444abe37f82fd2d88b81fafc542a3b9491fd4c2419eda43db" Mar 20 06:58:04 crc kubenswrapper[5136]: I0320 06:58:04.771940 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566498-pc964" Mar 20 06:58:05 crc kubenswrapper[5136]: I0320 06:58:05.159917 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566492-9gbqz"] Mar 20 06:58:05 crc kubenswrapper[5136]: I0320 06:58:05.166348 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566492-9gbqz"] Mar 20 06:58:06 crc kubenswrapper[5136]: I0320 06:58:06.408344 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="760c854a-7b9d-4582-9bcc-faf077008e0f" path="/var/lib/kubelet/pods/760c854a-7b9d-4582-9bcc-faf077008e0f/volumes" Mar 20 06:58:15 crc kubenswrapper[5136]: I0320 06:58:15.821620 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 06:58:15 crc kubenswrapper[5136]: I0320 06:58:15.822366 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 06:58:15 crc kubenswrapper[5136]: I0320 06:58:15.822470 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 06:58:15 crc kubenswrapper[5136]: I0320 06:58:15.823368 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 06:58:15 crc kubenswrapper[5136]: I0320 06:58:15.823470 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13" gracePeriod=600 Mar 20 06:58:16 crc kubenswrapper[5136]: I0320 06:58:16.875118 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13" exitCode=0 Mar 20 06:58:16 crc kubenswrapper[5136]: I0320 06:58:16.875231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13"} Mar 20 06:58:16 crc kubenswrapper[5136]: I0320 06:58:16.875952 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a"} Mar 20 06:58:16 crc kubenswrapper[5136]: I0320 06:58:16.875988 5136 scope.go:117] "RemoveContainer" containerID="f2470b9608ef78e15d9f0dbff949116078d862a69384d22e021abfcdff3bcda9" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.151752 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj"] Mar 20 07:00:00 crc kubenswrapper[5136]: E0320 07:00:00.152730 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" containerName="oc" Mar 20 07:00:00 crc 
kubenswrapper[5136]: I0320 07:00:00.152750 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" containerName="oc" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.152928 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" containerName="oc" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.153452 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.155782 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.156095 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.188359 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566500-wd9ph"] Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.189380 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566500-wd9ph"] Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.189408 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj"] Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.189516 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.191510 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvcln\" (UniqueName: \"kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln\") pod \"auto-csr-approver-29566500-wd9ph\" (UID: \"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9\") " pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.192102 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.193104 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.193407 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.293043 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvcln\" (UniqueName: \"kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln\") pod \"auto-csr-approver-29566500-wd9ph\" (UID: \"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9\") " pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.293092 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.293147 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzcjx\" (UniqueName: \"kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.293176 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.310769 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvcln\" (UniqueName: \"kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln\") pod \"auto-csr-approver-29566500-wd9ph\" (UID: \"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9\") " pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.394485 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzcjx\" (UniqueName: \"kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.394547 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.394588 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.395594 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.398040 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.411088 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzcjx\" (UniqueName: \"kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx\") pod \"collect-profiles-29566500-ljqvj\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.522230 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.537061 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.713583 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj"] Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.744564 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566500-wd9ph"] Mar 20 07:00:00 crc kubenswrapper[5136]: W0320 07:00:00.746330 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdbefeb1_6fcf_4868_a30e_9fc5a016daf9.slice/crio-6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92 WatchSource:0}: Error finding container 6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92: Status 404 returned error can't find the container with id 6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92 Mar 20 07:00:00 crc kubenswrapper[5136]: I0320 07:00:00.748299 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:00:01 crc kubenswrapper[5136]: I0320 07:00:01.374059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" event={"ID":"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9","Type":"ContainerStarted","Data":"6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92"} Mar 20 07:00:01 crc kubenswrapper[5136]: I0320 07:00:01.375446 5136 generic.go:334] "Generic (PLEG): container finished" podID="cd400575-ef96-4721-b617-29c85991f7f0" containerID="3ae7890d536278f5580d52b91ca1ce94c8e1b0783ea4d154db2f9c059b03bba9" exitCode=0 Mar 20 07:00:01 crc kubenswrapper[5136]: I0320 
07:00:01.375488 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" event={"ID":"cd400575-ef96-4721-b617-29c85991f7f0","Type":"ContainerDied","Data":"3ae7890d536278f5580d52b91ca1ce94c8e1b0783ea4d154db2f9c059b03bba9"} Mar 20 07:00:01 crc kubenswrapper[5136]: I0320 07:00:01.375518 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" event={"ID":"cd400575-ef96-4721-b617-29c85991f7f0","Type":"ContainerStarted","Data":"5bd48978a01a6a95902dab2c80c81f1c5d49c46cbbb607a25b21662394b4b67c"} Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.595078 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.744480 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume\") pod \"cd400575-ef96-4721-b617-29c85991f7f0\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.744540 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzcjx\" (UniqueName: \"kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx\") pod \"cd400575-ef96-4721-b617-29c85991f7f0\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.744594 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume\") pod \"cd400575-ef96-4721-b617-29c85991f7f0\" (UID: \"cd400575-ef96-4721-b617-29c85991f7f0\") " Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.745651 5136 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume" (OuterVolumeSpecName: "config-volume") pod "cd400575-ef96-4721-b617-29c85991f7f0" (UID: "cd400575-ef96-4721-b617-29c85991f7f0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.749608 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx" (OuterVolumeSpecName: "kube-api-access-dzcjx") pod "cd400575-ef96-4721-b617-29c85991f7f0" (UID: "cd400575-ef96-4721-b617-29c85991f7f0"). InnerVolumeSpecName "kube-api-access-dzcjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.749825 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cd400575-ef96-4721-b617-29c85991f7f0" (UID: "cd400575-ef96-4721-b617-29c85991f7f0"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.846099 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd400575-ef96-4721-b617-29c85991f7f0-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.846519 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzcjx\" (UniqueName: \"kubernetes.io/projected/cd400575-ef96-4721-b617-29c85991f7f0-kube-api-access-dzcjx\") on node \"crc\" DevicePath \"\"" Mar 20 07:00:02 crc kubenswrapper[5136]: I0320 07:00:02.846536 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd400575-ef96-4721-b617-29c85991f7f0-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:00:03 crc kubenswrapper[5136]: I0320 07:00:03.388348 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" event={"ID":"cd400575-ef96-4721-b617-29c85991f7f0","Type":"ContainerDied","Data":"5bd48978a01a6a95902dab2c80c81f1c5d49c46cbbb607a25b21662394b4b67c"} Mar 20 07:00:03 crc kubenswrapper[5136]: I0320 07:00:03.388391 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bd48978a01a6a95902dab2c80c81f1c5d49c46cbbb607a25b21662394b4b67c" Mar 20 07:00:03 crc kubenswrapper[5136]: I0320 07:00:03.388391 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj" Mar 20 07:00:20 crc kubenswrapper[5136]: I0320 07:00:20.490609 5136 generic.go:334] "Generic (PLEG): container finished" podID="bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" containerID="c700468779627f6961723a07d9133659d892564be897053e621e205bd14c1cbb" exitCode=0 Mar 20 07:00:20 crc kubenswrapper[5136]: I0320 07:00:20.490694 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" event={"ID":"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9","Type":"ContainerDied","Data":"c700468779627f6961723a07d9133659d892564be897053e621e205bd14c1cbb"} Mar 20 07:00:21 crc kubenswrapper[5136]: I0320 07:00:21.743433 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:21 crc kubenswrapper[5136]: I0320 07:00:21.866598 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvcln\" (UniqueName: \"kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln\") pod \"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9\" (UID: \"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9\") " Mar 20 07:00:21 crc kubenswrapper[5136]: I0320 07:00:21.874368 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln" (OuterVolumeSpecName: "kube-api-access-vvcln") pod "bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" (UID: "bdbefeb1-6fcf-4868-a30e-9fc5a016daf9"). InnerVolumeSpecName "kube-api-access-vvcln". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:00:21 crc kubenswrapper[5136]: I0320 07:00:21.968589 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvcln\" (UniqueName: \"kubernetes.io/projected/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9-kube-api-access-vvcln\") on node \"crc\" DevicePath \"\"" Mar 20 07:00:22 crc kubenswrapper[5136]: I0320 07:00:22.512505 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" event={"ID":"bdbefeb1-6fcf-4868-a30e-9fc5a016daf9","Type":"ContainerDied","Data":"6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92"} Mar 20 07:00:22 crc kubenswrapper[5136]: I0320 07:00:22.512874 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f05b6face7214db5c8aac57e7e559a2b93d05033e1d01f4917effb8c468fe92" Mar 20 07:00:22 crc kubenswrapper[5136]: I0320 07:00:22.512967 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566500-wd9ph" Mar 20 07:00:22 crc kubenswrapper[5136]: I0320 07:00:22.811831 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566494-v7mrb"] Mar 20 07:00:22 crc kubenswrapper[5136]: I0320 07:00:22.815376 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566494-v7mrb"] Mar 20 07:00:24 crc kubenswrapper[5136]: I0320 07:00:24.405404 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="793ba114-16f6-4ad2-bc47-daee6a819a00" path="/var/lib/kubelet/pods/793ba114-16f6-4ad2-bc47-daee6a819a00/volumes" Mar 20 07:00:28 crc kubenswrapper[5136]: I0320 07:00:28.683768 5136 scope.go:117] "RemoveContainer" containerID="bcfaf55d80db554feaeb3774c52e47f9f050ba6262ba6f06d4b21a8da6ad81d5" Mar 20 07:00:28 crc kubenswrapper[5136]: I0320 07:00:28.741003 5136 scope.go:117] "RemoveContainer" 
containerID="0fb2ad703d57143cc353d8d80b57c7e9e8375a9fe5053d2f01f256b52277bbbb" Mar 20 07:00:28 crc kubenswrapper[5136]: I0320 07:00:28.766564 5136 scope.go:117] "RemoveContainer" containerID="27efcdb323bdc3f56a43d4c3d542dd5fa1e9865775e2f1ad9ed6c1b623aed3e5" Mar 20 07:00:45 crc kubenswrapper[5136]: I0320 07:00:45.821777 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:00:45 crc kubenswrapper[5136]: I0320 07:00:45.822399 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:01:15 crc kubenswrapper[5136]: I0320 07:01:15.821661 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:01:15 crc kubenswrapper[5136]: I0320 07:01:15.824352 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:01:45 crc kubenswrapper[5136]: I0320 07:01:45.821565 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:01:45 crc kubenswrapper[5136]: I0320 07:01:45.822161 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:01:45 crc kubenswrapper[5136]: I0320 07:01:45.822222 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:01:45 crc kubenswrapper[5136]: I0320 07:01:45.823017 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:01:45 crc kubenswrapper[5136]: I0320 07:01:45.823109 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a" gracePeriod=600 Mar 20 07:01:46 crc kubenswrapper[5136]: I0320 07:01:46.049474 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a" exitCode=0 Mar 20 07:01:46 crc kubenswrapper[5136]: I0320 07:01:46.049498 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a"} Mar 20 07:01:46 crc kubenswrapper[5136]: I0320 07:01:46.049817 5136 scope.go:117] "RemoveContainer" containerID="75f86a961b5495e1a65ce80a7e52156279382dd73f0b9cba9f06cd8c4be35b13" Mar 20 07:01:47 crc kubenswrapper[5136]: I0320 07:01:47.059341 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52"} Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.150583 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566502-5gzjz"] Mar 20 07:02:00 crc kubenswrapper[5136]: E0320 07:02:00.151630 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" containerName="oc" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.151655 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" containerName="oc" Mar 20 07:02:00 crc kubenswrapper[5136]: E0320 07:02:00.151679 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd400575-ef96-4721-b617-29c85991f7f0" containerName="collect-profiles" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.151693 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd400575-ef96-4721-b617-29c85991f7f0" containerName="collect-profiles" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.151951 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" containerName="oc" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.151990 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd400575-ef96-4721-b617-29c85991f7f0" 
containerName="collect-profiles" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.152642 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.158484 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.158878 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.159672 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.170108 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566502-5gzjz"] Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.269531 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82mt\" (UniqueName: \"kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt\") pod \"auto-csr-approver-29566502-5gzjz\" (UID: \"a7239b4f-11f6-4f5c-8d78-c233e33b8a79\") " pod="openshift-infra/auto-csr-approver-29566502-5gzjz" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.371262 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r82mt\" (UniqueName: \"kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt\") pod \"auto-csr-approver-29566502-5gzjz\" (UID: \"a7239b4f-11f6-4f5c-8d78-c233e33b8a79\") " pod="openshift-infra/auto-csr-approver-29566502-5gzjz" Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.393635 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r82mt\" (UniqueName: 
\"kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt\") pod \"auto-csr-approver-29566502-5gzjz\" (UID: \"a7239b4f-11f6-4f5c-8d78-c233e33b8a79\") " pod="openshift-infra/auto-csr-approver-29566502-5gzjz"
Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.495145 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566502-5gzjz"
Mar 20 07:02:00 crc kubenswrapper[5136]: I0320 07:02:00.914923 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566502-5gzjz"]
Mar 20 07:02:01 crc kubenswrapper[5136]: I0320 07:02:01.152465 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" event={"ID":"a7239b4f-11f6-4f5c-8d78-c233e33b8a79","Type":"ContainerStarted","Data":"570656c1647fcfacbf8c2593e3a4ddcb92b1cbe326de3030885c15ece21a5150"}
Mar 20 07:02:02 crc kubenswrapper[5136]: I0320 07:02:02.159644 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" event={"ID":"a7239b4f-11f6-4f5c-8d78-c233e33b8a79","Type":"ContainerStarted","Data":"ec7cb3f6c1f148e1e156127b9c7522e3ead66e4d7bc579e04406415291cd9efb"}
Mar 20 07:02:02 crc kubenswrapper[5136]: I0320 07:02:02.170933 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" podStartSLOduration=1.352445003 podStartE2EDuration="2.170911939s" podCreationTimestamp="2026-03-20 07:02:00 +0000 UTC" firstStartedPulling="2026-03-20 07:02:00.923930629 +0000 UTC m=+753.183241790" lastFinishedPulling="2026-03-20 07:02:01.742397575 +0000 UTC m=+754.001708726" observedRunningTime="2026-03-20 07:02:02.170360241 +0000 UTC m=+754.429671392" watchObservedRunningTime="2026-03-20 07:02:02.170911939 +0000 UTC m=+754.430223110"
Mar 20 07:02:03 crc kubenswrapper[5136]: I0320 07:02:03.170314 5136 generic.go:334] "Generic (PLEG): container finished" podID="a7239b4f-11f6-4f5c-8d78-c233e33b8a79" containerID="ec7cb3f6c1f148e1e156127b9c7522e3ead66e4d7bc579e04406415291cd9efb" exitCode=0
Mar 20 07:02:03 crc kubenswrapper[5136]: I0320 07:02:03.170439 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" event={"ID":"a7239b4f-11f6-4f5c-8d78-c233e33b8a79","Type":"ContainerDied","Data":"ec7cb3f6c1f148e1e156127b9c7522e3ead66e4d7bc579e04406415291cd9efb"}
Mar 20 07:02:04 crc kubenswrapper[5136]: I0320 07:02:04.400543 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566502-5gzjz"
Mar 20 07:02:04 crc kubenswrapper[5136]: I0320 07:02:04.563216 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r82mt\" (UniqueName: \"kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt\") pod \"a7239b4f-11f6-4f5c-8d78-c233e33b8a79\" (UID: \"a7239b4f-11f6-4f5c-8d78-c233e33b8a79\") "
Mar 20 07:02:04 crc kubenswrapper[5136]: I0320 07:02:04.571905 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt" (OuterVolumeSpecName: "kube-api-access-r82mt") pod "a7239b4f-11f6-4f5c-8d78-c233e33b8a79" (UID: "a7239b4f-11f6-4f5c-8d78-c233e33b8a79"). InnerVolumeSpecName "kube-api-access-r82mt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:02:04 crc kubenswrapper[5136]: I0320 07:02:04.665328 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r82mt\" (UniqueName: \"kubernetes.io/projected/a7239b4f-11f6-4f5c-8d78-c233e33b8a79-kube-api-access-r82mt\") on node \"crc\" DevicePath \"\""
Mar 20 07:02:05 crc kubenswrapper[5136]: I0320 07:02:05.186518 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566502-5gzjz" event={"ID":"a7239b4f-11f6-4f5c-8d78-c233e33b8a79","Type":"ContainerDied","Data":"570656c1647fcfacbf8c2593e3a4ddcb92b1cbe326de3030885c15ece21a5150"}
Mar 20 07:02:05 crc kubenswrapper[5136]: I0320 07:02:05.186572 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="570656c1647fcfacbf8c2593e3a4ddcb92b1cbe326de3030885c15ece21a5150"
Mar 20 07:02:05 crc kubenswrapper[5136]: I0320 07:02:05.186601 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566502-5gzjz"
Mar 20 07:02:05 crc kubenswrapper[5136]: I0320 07:02:05.242741 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566496-kkbk6"]
Mar 20 07:02:05 crc kubenswrapper[5136]: I0320 07:02:05.250685 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566496-kkbk6"]
Mar 20 07:02:06 crc kubenswrapper[5136]: I0320 07:02:06.403090 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb" path="/var/lib/kubelet/pods/7123c3cf-7f09-4f1f-a99f-b5a3a27c54eb/volumes"
Mar 20 07:02:28 crc kubenswrapper[5136]: I0320 07:02:28.849251 5136 scope.go:117] "RemoveContainer" containerID="65e785eb1dbd67ac0fede3f7a5dc27c137ef39fb9832a3755e23a954eb908065"
Mar 20 07:03:33 crc kubenswrapper[5136]: I0320 07:03:33.897916 5136 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.148137 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566504-fnsrq"]
Mar 20 07:04:00 crc kubenswrapper[5136]: E0320 07:04:00.149055 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7239b4f-11f6-4f5c-8d78-c233e33b8a79" containerName="oc"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.149077 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7239b4f-11f6-4f5c-8d78-c233e33b8a79" containerName="oc"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.149243 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7239b4f-11f6-4f5c-8d78-c233e33b8a79" containerName="oc"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.149787 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566504-fnsrq"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.153184 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.154091 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.154331 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.157436 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566504-fnsrq"]
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.321052 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z69tx\" (UniqueName: \"kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx\") pod \"auto-csr-approver-29566504-fnsrq\" (UID: \"f8e1a6ad-3e5f-4a83-b429-d132710b8146\") " pod="openshift-infra/auto-csr-approver-29566504-fnsrq"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.422785 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z69tx\" (UniqueName: \"kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx\") pod \"auto-csr-approver-29566504-fnsrq\" (UID: \"f8e1a6ad-3e5f-4a83-b429-d132710b8146\") " pod="openshift-infra/auto-csr-approver-29566504-fnsrq"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.442588 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z69tx\" (UniqueName: \"kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx\") pod \"auto-csr-approver-29566504-fnsrq\" (UID: \"f8e1a6ad-3e5f-4a83-b429-d132710b8146\") " pod="openshift-infra/auto-csr-approver-29566504-fnsrq"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.475328 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566504-fnsrq"
Mar 20 07:04:00 crc kubenswrapper[5136]: I0320 07:04:00.715236 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566504-fnsrq"]
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.486629 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566504-fnsrq" event={"ID":"f8e1a6ad-3e5f-4a83-b429-d132710b8146","Type":"ContainerStarted","Data":"f44bc06f78feac44286461e41ae87086d5e10b233578d1bb54fbda0fb313e9f8"}
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.834900 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"]
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.836361 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.852363 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.852438 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gprrc\" (UniqueName: \"kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.852466 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.866150 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"]
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.953738 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.953841 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gprrc\" (UniqueName: \"kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.953864 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.954287 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.954488 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:01 crc kubenswrapper[5136]: I0320 07:04:01.986167 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gprrc\" (UniqueName: \"kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc\") pod \"redhat-operators-bhgm2\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") " pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.162130 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.365899 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"]
Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.494535 5136 generic.go:334] "Generic (PLEG): container finished" podID="f8e1a6ad-3e5f-4a83-b429-d132710b8146" containerID="a9c6142c6c3be406a353a6109a8cb8b7b38a7799c67785c8207003ce9a223a42" exitCode=0
Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.494623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566504-fnsrq" event={"ID":"f8e1a6ad-3e5f-4a83-b429-d132710b8146","Type":"ContainerDied","Data":"a9c6142c6c3be406a353a6109a8cb8b7b38a7799c67785c8207003ce9a223a42"}
Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.496597 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerStarted","Data":"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba"}
Mar 20 07:04:02 crc kubenswrapper[5136]: I0320 07:04:02.496646 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerStarted","Data":"eca769e7e0fbb0a2c72c3134d942fe624852d1d7e3b55c96cfb91d66352d23e1"}
Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.504088 5136 generic.go:334] "Generic (PLEG): container finished" podID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerID="7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba" exitCode=0
Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.504228 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerDied","Data":"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba"}
Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.761971 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566504-fnsrq"
Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.777880 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z69tx\" (UniqueName: \"kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx\") pod \"f8e1a6ad-3e5f-4a83-b429-d132710b8146\" (UID: \"f8e1a6ad-3e5f-4a83-b429-d132710b8146\") "
Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.809334 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx" (OuterVolumeSpecName: "kube-api-access-z69tx") pod "f8e1a6ad-3e5f-4a83-b429-d132710b8146" (UID: "f8e1a6ad-3e5f-4a83-b429-d132710b8146"). InnerVolumeSpecName "kube-api-access-z69tx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:04:03 crc kubenswrapper[5136]: I0320 07:04:03.879640 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z69tx\" (UniqueName: \"kubernetes.io/projected/f8e1a6ad-3e5f-4a83-b429-d132710b8146-kube-api-access-z69tx\") on node \"crc\" DevicePath \"\""
Mar 20 07:04:04 crc kubenswrapper[5136]: I0320 07:04:04.516379 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566504-fnsrq" event={"ID":"f8e1a6ad-3e5f-4a83-b429-d132710b8146","Type":"ContainerDied","Data":"f44bc06f78feac44286461e41ae87086d5e10b233578d1bb54fbda0fb313e9f8"}
Mar 20 07:04:04 crc kubenswrapper[5136]: I0320 07:04:04.516421 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f44bc06f78feac44286461e41ae87086d5e10b233578d1bb54fbda0fb313e9f8"
Mar 20 07:04:04 crc kubenswrapper[5136]: I0320 07:04:04.516445 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566504-fnsrq"
Mar 20 07:04:04 crc kubenswrapper[5136]: I0320 07:04:04.840174 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566498-pc964"]
Mar 20 07:04:04 crc kubenswrapper[5136]: I0320 07:04:04.848273 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566498-pc964"]
Mar 20 07:04:05 crc kubenswrapper[5136]: I0320 07:04:05.522441 5136 generic.go:334] "Generic (PLEG): container finished" podID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerID="3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748" exitCode=0
Mar 20 07:04:05 crc kubenswrapper[5136]: I0320 07:04:05.522489 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerDied","Data":"3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748"}
Mar 20 07:04:06 crc kubenswrapper[5136]: I0320 07:04:06.405262 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d" path="/var/lib/kubelet/pods/96b97a95-a6f3-47fa-8fe3-c8b77bc7a22d/volumes"
Mar 20 07:04:06 crc kubenswrapper[5136]: I0320 07:04:06.530897 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerStarted","Data":"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3"}
Mar 20 07:04:06 crc kubenswrapper[5136]: I0320 07:04:06.560728 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-bhgm2" podStartSLOduration=3.006351796 podStartE2EDuration="5.560693955s" podCreationTimestamp="2026-03-20 07:04:01 +0000 UTC" firstStartedPulling="2026-03-20 07:04:03.506512162 +0000 UTC m=+875.765823313" lastFinishedPulling="2026-03-20 07:04:06.060854321 +0000 UTC m=+878.320165472" observedRunningTime="2026-03-20 07:04:06.5541611 +0000 UTC m=+878.813472291" watchObservedRunningTime="2026-03-20 07:04:06.560693955 +0000 UTC m=+878.820005166"
Mar 20 07:04:12 crc kubenswrapper[5136]: I0320 07:04:12.163075 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:12 crc kubenswrapper[5136]: I0320 07:04:12.164760 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:13 crc kubenswrapper[5136]: I0320 07:04:13.204054 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-bhgm2" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="registry-server" probeResult="failure" output=<
Mar 20 07:04:13 crc kubenswrapper[5136]: 	timeout: failed to connect service ":50051" within 1s
Mar 20 07:04:13 crc kubenswrapper[5136]:  >
Mar 20 07:04:15 crc kubenswrapper[5136]: I0320 07:04:15.821490 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 07:04:15 crc kubenswrapper[5136]: I0320 07:04:15.821551 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 07:04:22 crc kubenswrapper[5136]: I0320 07:04:22.232603 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:22 crc kubenswrapper[5136]: I0320 07:04:22.274680 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:22 crc kubenswrapper[5136]: I0320 07:04:22.471459 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"]
Mar 20 07:04:23 crc kubenswrapper[5136]: I0320 07:04:23.627867 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-bhgm2" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="registry-server" containerID="cri-o://f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3" gracePeriod=2
Mar 20 07:04:23 crc kubenswrapper[5136]: I0320 07:04:23.961120 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.052270 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities\") pod \"c199c8cd-de7b-4743-9ce7-786a33ff47da\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") "
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.052393 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gprrc\" (UniqueName: \"kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc\") pod \"c199c8cd-de7b-4743-9ce7-786a33ff47da\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") "
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.052456 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content\") pod \"c199c8cd-de7b-4743-9ce7-786a33ff47da\" (UID: \"c199c8cd-de7b-4743-9ce7-786a33ff47da\") "
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.053169 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities" (OuterVolumeSpecName: "utilities") pod "c199c8cd-de7b-4743-9ce7-786a33ff47da" (UID: "c199c8cd-de7b-4743-9ce7-786a33ff47da"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.063039 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc" (OuterVolumeSpecName: "kube-api-access-gprrc") pod "c199c8cd-de7b-4743-9ce7-786a33ff47da" (UID: "c199c8cd-de7b-4743-9ce7-786a33ff47da"). InnerVolumeSpecName "kube-api-access-gprrc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.154344 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.154387 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gprrc\" (UniqueName: \"kubernetes.io/projected/c199c8cd-de7b-4743-9ce7-786a33ff47da-kube-api-access-gprrc\") on node \"crc\" DevicePath \"\""
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.211323 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c199c8cd-de7b-4743-9ce7-786a33ff47da" (UID: "c199c8cd-de7b-4743-9ce7-786a33ff47da"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.255723 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c199c8cd-de7b-4743-9ce7-786a33ff47da-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.634496 5136 generic.go:334] "Generic (PLEG): container finished" podID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerID="f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3" exitCode=0
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.634535 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-bhgm2"
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.634544 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerDied","Data":"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3"}
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.634614 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-bhgm2" event={"ID":"c199c8cd-de7b-4743-9ce7-786a33ff47da","Type":"ContainerDied","Data":"eca769e7e0fbb0a2c72c3134d942fe624852d1d7e3b55c96cfb91d66352d23e1"}
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.634641 5136 scope.go:117] "RemoveContainer" containerID="f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3"
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.656533 5136 scope.go:117] "RemoveContainer" containerID="3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748"
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.657611 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"]
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.669325 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-bhgm2"]
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.682531 5136 scope.go:117] "RemoveContainer" containerID="7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba"
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.699505 5136 scope.go:117] "RemoveContainer" containerID="f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3"
Mar 20 07:04:24 crc kubenswrapper[5136]: E0320 07:04:24.700017 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3\": container with ID starting with f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3 not found: ID does not exist" containerID="f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3"
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.700083 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3"} err="failed to get container status \"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3\": rpc error: code = NotFound desc = could not find container \"f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3\": container with ID starting with f423cb451ecdbaecb5d105d305d33f81a70ce62187f671dd6ebb2e372237e3b3 not found: ID does not exist"
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.700118 5136 scope.go:117] "RemoveContainer" containerID="3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748"
Mar 20 07:04:24 crc kubenswrapper[5136]: E0320 07:04:24.700523 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748\": container with ID starting with 3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748 not found: ID does not exist" containerID="3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748"
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.700559 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748"} err="failed to get container status \"3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748\": rpc error: code = NotFound desc = could not find container \"3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748\": container with ID starting with 3244ec9ab9141c687a5d058b4c90833afc7130823ff5a1d96f74d49ce15f3748 not found: ID does not exist"
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.700602 5136 scope.go:117] "RemoveContainer" containerID="7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba"
Mar 20 07:04:24 crc kubenswrapper[5136]: E0320 07:04:24.700961 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba\": container with ID starting with 7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba not found: ID does not exist" containerID="7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba"
Mar 20 07:04:24 crc kubenswrapper[5136]: I0320 07:04:24.701001 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba"} err="failed to get container status \"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba\": rpc error: code = NotFound desc = could not find container \"7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba\": container with ID starting with 7551e9b913392ddacf772d1ef5e5d53de1a9182e9a2fafde8ffc8343cb303fba not found: ID does not exist"
Mar 20 07:04:26 crc kubenswrapper[5136]: I0320 07:04:26.407282 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" path="/var/lib/kubelet/pods/c199c8cd-de7b-4743-9ce7-786a33ff47da/volumes"
Mar 20 07:04:28 crc kubenswrapper[5136]: I0320 07:04:28.929642 5136 scope.go:117] "RemoveContainer" containerID="c00038ddb8710afcc46ffbe4488fc8393e21c46c51a56a16f3faef658211be51"
Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.683185 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbmbh"]
Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684460 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-controller" containerID="cri-o://47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3" gracePeriod=30
Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684626 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="northd" containerID="cri-o://84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200" gracePeriod=30
Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684748 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-acl-logging" containerID="cri-o://09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c" gracePeriod=30
Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684684 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="nbdb" containerID="cri-o://f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568" gracePeriod=30
Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684644 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="sbdb" containerID="cri-o://2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383" gracePeriod=30
Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684864 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-node" containerID="cri-o://0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4" gracePeriod=30
Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.684630 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566" gracePeriod=30
Mar 20 07:04:33 crc kubenswrapper[5136]: I0320 07:04:33.733027 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" containerID="cri-o://ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" gracePeriod=30
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.018021 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/3.log"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.020953 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovn-acl-logging/0.log"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.022115 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovn-controller/0.log"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.022847 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072654 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-mpvnm"]
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072856 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="extract-content"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072871 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="extract-content"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072881 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kubecfg-setup"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072888 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kubecfg-setup"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072898 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072904 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072911 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-controller"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072916 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-controller"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072923 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="sbdb"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072929 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="sbdb"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072940 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072946 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072953 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="registry-server"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072959 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="registry-server"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072966 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072972 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072979 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.072986 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.072994 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="northd"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073000 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="northd"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073008 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-node"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073014 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-node"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073021 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-ovn-metrics"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073027 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-ovn-metrics"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073035 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e1a6ad-3e5f-4a83-b429-d132710b8146" containerName="oc"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073041 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e1a6ad-3e5f-4a83-b429-d132710b8146" containerName="oc"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073049 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-acl-logging"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073054 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-acl-logging"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073062 5136 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073067 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073076 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="extract-utilities" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073083 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="extract-utilities" Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.073091 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="nbdb" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073096 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="nbdb" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073177 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073186 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e1a6ad-3e5f-4a83-b429-d132710b8146" containerName="oc" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073194 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="nbdb" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073204 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-ovn-metrics" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073212 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073220 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="kube-rbac-proxy-node" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073226 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="northd" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073232 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073239 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c199c8cd-de7b-4743-9ce7-786a33ff47da" containerName="registry-server" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073244 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="sbdb" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073251 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073257 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovn-acl-logging" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073264 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.073412 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerName="ovnkube-controller" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.074830 5136 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125339 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125397 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125434 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125463 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125504 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125502 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125513 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125532 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125579 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125586 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125610 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125625 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125632 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125650 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125658 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125676 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125687 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125704 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125728 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125750 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125781 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125806 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125873 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrnqr\" (UniqueName: \"kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125892 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125931 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125953 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.125981 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn\") pod \"963bf1ca-b871-4cad-a1fc-cf829a70a81a\" (UID: 
\"963bf1ca-b871-4cad-a1fc-cf829a70a81a\") " Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126241 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126288 5136 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126299 5136 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126309 5136 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-systemd-units\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126318 5136 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-netd\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126327 5136 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126337 5136 reconciler_common.go:293] "Volume detached for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-cni-bin\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126346 5136 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126354 5136 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126482 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126533 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126775 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126830 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log" (OuterVolumeSpecName: "node-log") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126839 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash" (OuterVolumeSpecName: "host-slash") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126861 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126929 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket" (OuterVolumeSpecName: "log-socket") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.126870 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.132317 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.132355 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr" (OuterVolumeSpecName: "kube-api-access-nrnqr") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "kube-api-access-nrnqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.138979 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "963bf1ca-b871-4cad-a1fc-cf829a70a81a" (UID: "963bf1ca-b871-4cad-a1fc-cf829a70a81a"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227040 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-node-log\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227110 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-systemd-units\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227258 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-kubelet\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227328 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-script-lib\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227449 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-bin\") pod \"ovnkube-node-mpvnm\" (UID: 
\"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227501 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-etc-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.227539 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-var-lib-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-log-socket\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228158 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-systemd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228304 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-env-overrides\") pod 
\"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-netns\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228484 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-config\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228556 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-netd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228614 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228711 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228806 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-ovn\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228896 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-slash\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228917 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228940 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/535b87fd-9e45-4845-8569-975e6c108579-ovn-node-metrics-cert\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.228964 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-r2c7k\" (UniqueName: \"kubernetes.io/projected/535b87fd-9e45-4845-8569-975e6c108579-kube-api-access-r2c7k\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229120 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrnqr\" (UniqueName: \"kubernetes.io/projected/963bf1ca-b871-4cad-a1fc-cf829a70a81a-kube-api-access-nrnqr\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229143 5136 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-log-socket\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229157 5136 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-node-log\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229169 5136 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-run-netns\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229182 5136 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229195 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229209 5136 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" 
(UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229221 5136 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-slash\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229232 5136 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-env-overrides\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229243 5136 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-host-kubelet\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229254 5136 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/963bf1ca-b871-4cad-a1fc-cf829a70a81a-run-systemd\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.229265 5136 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/963bf1ca-b871-4cad-a1fc-cf829a70a81a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330130 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-node-log\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330197 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-systemd-units\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330236 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-kubelet\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330248 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-node-log\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330264 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-script-lib\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-systemd-units\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330325 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-bin\") pod 
\"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-etc-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330371 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-var-lib-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330398 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-log-socket\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330409 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-etc-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330399 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-kubelet\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330423 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-systemd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330483 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-bin\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330516 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-var-lib-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330540 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-env-overrides\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330574 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-systemd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 
07:04:34.330661 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-log-socket\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330744 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-netns\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330864 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-config\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330902 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-netd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330905 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-netns\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330953 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-cni-netd\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.330961 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331001 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-run-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331013 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331057 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-ovn\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331116 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331119 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/535b87fd-9e45-4845-8569-975e6c108579-ovn-node-metrics-cert\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331155 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-slash\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331169 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331186 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r2c7k\" (UniqueName: \"kubernetes.io/projected/535b87fd-9e45-4845-8569-975e6c108579-kube-api-access-r2c7k\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331189 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-env-overrides\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331193 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-ovn\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331320 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-run-openvswitch\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331344 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/535b87fd-9e45-4845-8569-975e6c108579-host-slash\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331489 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-script-lib\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.331975 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/535b87fd-9e45-4845-8569-975e6c108579-ovnkube-config\") pod 
\"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.335293 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/535b87fd-9e45-4845-8569-975e6c108579-ovn-node-metrics-cert\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.356854 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2c7k\" (UniqueName: \"kubernetes.io/projected/535b87fd-9e45-4845-8569-975e6c108579-kube-api-access-r2c7k\") pod \"ovnkube-node-mpvnm\" (UID: \"535b87fd-9e45-4845-8569-975e6c108579\") " pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.392968 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.710968 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovnkube-controller/3.log" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.714552 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovn-acl-logging/0.log" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715229 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nbmbh_963bf1ca-b871-4cad-a1fc-cf829a70a81a/ovn-controller/0.log" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715647 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715675 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715688 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715699 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715710 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" 
containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715719 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4" exitCode=0 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715728 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c" exitCode=143 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715737 5136 generic.go:334] "Generic (PLEG): container finished" podID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3" exitCode=143 Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715724 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715794 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715829 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715847 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" Mar 20 
07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715906 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.715849 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717664 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717728 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717747 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717761 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717773 5136 pod_container_deletor.go:114] "Failed to 
issue the request to remove container" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717785 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717796 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717807 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717847 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717859 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717876 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.717894 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 
07:04:34.717995 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718009 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718021 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718033 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718045 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718058 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718069 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718081 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718092 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718110 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718129 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718143 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718155 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718168 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718183 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718198 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718212 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718228 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718243 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718258 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718278 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nbmbh" event={"ID":"963bf1ca-b871-4cad-a1fc-cf829a70a81a","Type":"ContainerDied","Data":"fc8a676d87b2c6b9273e55ddfc0af4b456dbdcc2adee4a1bbfceebb87273789e"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718302 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718321 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718336 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718353 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718364 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718376 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718387 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718398 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718409 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.718420 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.722537 5136 generic.go:334] "Generic (PLEG): container finished" podID="535b87fd-9e45-4845-8569-975e6c108579" containerID="5bf0954043fc96f805117862c6e3ed58f957eef384f93e52da58e461a7705fc5" exitCode=0
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.722636 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerDied","Data":"5bf0954043fc96f805117862c6e3ed58f957eef384f93e52da58e461a7705fc5"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.722668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"79a26dfd433adb18868a093558a28312911f88ed08cdf49137a074913d6b45ce"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.726925 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/2.log"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.728031 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/1.log"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.728081 5136 generic.go:334] "Generic (PLEG): container finished" podID="263c5427-a835-40c6-93cb-4bb66a83ea5b" containerID="758a96f3880280b2cc4897f196524e1c9a081a903d0afe658e33991167460924" exitCode=2
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.728140 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerDied","Data":"758a96f3880280b2cc4897f196524e1c9a081a903d0afe658e33991167460924"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.728205 5136 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644"}
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.729070 5136 scope.go:117] "RemoveContainer" containerID="758a96f3880280b2cc4897f196524e1c9a081a903d0afe658e33991167460924"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.743782 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbmbh"]
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.751708 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nbmbh"]
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.754782 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.786173 5136 scope.go:117] "RemoveContainer" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.843916 5136 scope.go:117] "RemoveContainer" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.866195 5136 scope.go:117] "RemoveContainer" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.882427 5136 scope.go:117] "RemoveContainer" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.894984 5136 scope.go:117] "RemoveContainer" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.906289 5136 scope.go:117] "RemoveContainer" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.923696 5136 scope.go:117] "RemoveContainer" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.949747 5136 scope.go:117] "RemoveContainer" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.984274 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.984619 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.984648 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} err="failed to get container status \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.984669 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.984853 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": container with ID starting with 07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c not found: ID does not exist" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.984875 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} err="failed to get container status \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": rpc error: code = NotFound desc = could not find container \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": container with ID starting with 07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.984888 5136 scope.go:117] "RemoveContainer" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.985043 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": container with ID starting with 2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383 not found: ID does not exist" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985060 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} err="failed to get container status \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": rpc error: code = NotFound desc = could not find container \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": container with ID starting with 2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985073 5136 scope.go:117] "RemoveContainer" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.985394 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": container with ID starting with f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568 not found: ID does not exist" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985429 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} err="failed to get container status \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": rpc error: code = NotFound desc = could not find container \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": container with ID starting with f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985453 5136 scope.go:117] "RemoveContainer" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.985674 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": container with ID starting with 84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200 not found: ID does not exist" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985693 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} err="failed to get container status \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": rpc error: code = NotFound desc = could not find container \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": container with ID starting with 84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.985706 5136 scope.go:117] "RemoveContainer" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.986036 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": container with ID starting with a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566 not found: ID does not exist" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986115 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} err="failed to get container status \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": rpc error: code = NotFound desc = could not find container \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": container with ID starting with a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986177 5136 scope.go:117] "RemoveContainer" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.986448 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": container with ID starting with 0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4 not found: ID does not exist" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986469 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} err="failed to get container status \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": rpc error: code = NotFound desc = could not find container \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": container with ID starting with 0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986482 5136 scope.go:117] "RemoveContainer" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.986684 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": container with ID starting with 09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c not found: ID does not exist" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986706 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} err="failed to get container status \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": rpc error: code = NotFound desc = could not find container \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": container with ID starting with 09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.986721 5136 scope.go:117] "RemoveContainer" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.986999 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": container with ID starting with 47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3 not found: ID does not exist" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987020 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} err="failed to get container status \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": rpc error: code = NotFound desc = could not find container \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": container with ID starting with 47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987034 5136 scope.go:117] "RemoveContainer" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"
Mar 20 07:04:34 crc kubenswrapper[5136]: E0320 07:04:34.987439 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": container with ID starting with 0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983 not found: ID does not exist" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987500 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} err="failed to get container status \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": rpc error: code = NotFound desc = could not find container \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": container with ID starting with 0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987557 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987889 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} err="failed to get container status \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.987930 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.988617 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} err="failed to get container status \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": rpc error: code = NotFound desc = could not find container \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": container with ID starting with 07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.988673 5136 scope.go:117] "RemoveContainer" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.989011 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} err="failed to get container status \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": rpc error: code = NotFound desc = could not find container \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": container with ID starting with 2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.989029 5136 scope.go:117] "RemoveContainer" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.989322 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} err="failed to get container status \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": rpc error: code = NotFound desc = could not find container \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": container with ID starting with f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.989354 5136 scope.go:117] "RemoveContainer" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.989698 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} err="failed to get container status \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": rpc error: code = NotFound desc = could not find container \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": container with ID starting with 84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.989718 5136 scope.go:117] "RemoveContainer" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990074 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} err="failed to get container status \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": rpc error: code = NotFound desc = could not find container \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": container with ID starting with a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990096 5136 scope.go:117] "RemoveContainer" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990469 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} err="failed to get container status \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": rpc error: code = NotFound desc = could not find container \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": container with ID starting with 0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990487 5136 scope.go:117] "RemoveContainer" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990700 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} err="failed to get container status \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": rpc error: code = NotFound desc = could not find container \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": container with ID starting with 09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.990722 5136 scope.go:117] "RemoveContainer" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991052 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} err="failed to get container status \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": rpc error: code = NotFound desc = could not find container \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": container with ID starting with 47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991113 5136 scope.go:117] "RemoveContainer" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991415 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} err="failed to get container status \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": rpc error: code = NotFound desc = could not find container \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": container with ID starting with 0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991440 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991758 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} err="failed to get container status \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.991803 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992104 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} err="failed to get container status \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": rpc error: code = NotFound desc = could not find container \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": container with ID starting with 07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992126 5136 scope.go:117] "RemoveContainer" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992361 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} err="failed to get container status \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": rpc error: code = NotFound desc = could not find container \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": container with ID starting with 2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992378 5136 scope.go:117] "RemoveContainer" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992651 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} err="failed to get container status \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": rpc error: code = NotFound desc = could not find container \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": container with ID starting with f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992668 5136 scope.go:117] "RemoveContainer" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992927 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} err="failed to get container status \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": rpc error: code = NotFound desc = could not find container \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": container with ID starting with 84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.992944 5136 scope.go:117] "RemoveContainer" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.993404 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} err="failed to get container status \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": rpc error: code = NotFound desc = could not find container \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": container with ID starting with a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.993446 5136 scope.go:117] "RemoveContainer" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.993706 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} err="failed to get container status \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": rpc error: code = NotFound desc = could not find container \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": container with ID starting with 0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.993728 5136 scope.go:117] "RemoveContainer" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994087 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} err="failed to get container status \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": rpc error: code = NotFound desc = could not find container \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": container with ID starting with 09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994141 5136 scope.go:117] "RemoveContainer" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994372 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} err="failed to get container status \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": rpc error: code = NotFound desc = could not find container \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": container with ID starting with 47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3 not found: ID does not exist"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994410 5136 scope.go:117] "RemoveContainer" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"
Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994701 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} err="failed to get container status \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": rpc error: code = NotFound desc = could
not find container \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": container with ID starting with 0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.994756 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.995115 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} err="failed to get container status \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.995184 5136 scope.go:117] "RemoveContainer" containerID="07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.995470 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c"} err="failed to get container status \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": rpc error: code = NotFound desc = could not find container \"07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c\": container with ID starting with 07154fa7d25c95518047eca715f71cb3edf8efcd781856260ce8aa01d3c0969c not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.995519 5136 scope.go:117] "RemoveContainer" containerID="2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 
07:04:34.995791 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383"} err="failed to get container status \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": rpc error: code = NotFound desc = could not find container \"2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383\": container with ID starting with 2a400775c1d89d491236b45f5c8fa06ce3b9aac901299af3ed6b91e66eba1383 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.995821 5136 scope.go:117] "RemoveContainer" containerID="f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996078 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568"} err="failed to get container status \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": rpc error: code = NotFound desc = could not find container \"f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568\": container with ID starting with f961a7b7db9c8b8eb9c736ac00b96d860037d9ba4626fe65e1d60b44ddac2568 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996115 5136 scope.go:117] "RemoveContainer" containerID="84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996429 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200"} err="failed to get container status \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": rpc error: code = NotFound desc = could not find container \"84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200\": container with ID starting with 
84ddad5dcd7e1d582a371bb5d9c48b0df1b8628ec3a497f3b8bdaa088ec7c200 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996449 5136 scope.go:117] "RemoveContainer" containerID="a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996689 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566"} err="failed to get container status \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": rpc error: code = NotFound desc = could not find container \"a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566\": container with ID starting with a9838513693cb8c594b49c55ebeffe32da682721641226072c64401f6358d566 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.996707 5136 scope.go:117] "RemoveContainer" containerID="0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.997049 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4"} err="failed to get container status \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": rpc error: code = NotFound desc = could not find container \"0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4\": container with ID starting with 0c835a9c0b6b65ebcbb51f6df758bf07946b40618305cae58b7fcf7f628d43b4 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.997086 5136 scope.go:117] "RemoveContainer" containerID="09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.998232 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c"} err="failed to get container status \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": rpc error: code = NotFound desc = could not find container \"09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c\": container with ID starting with 09c4e0d931858138a83a3509c58fce340ee6863763be6262b36833b1ee17000c not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.998255 5136 scope.go:117] "RemoveContainer" containerID="47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.998492 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3"} err="failed to get container status \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": rpc error: code = NotFound desc = could not find container \"47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3\": container with ID starting with 47bce7e96ce0bde967b1e07656a16e2e20a58fcaaa776b4e8a077a8e7df439d3 not found: ID does not exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.998601 5136 scope.go:117] "RemoveContainer" containerID="0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.999020 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983"} err="failed to get container status \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": rpc error: code = NotFound desc = could not find container \"0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983\": container with ID starting with 0725e0325222092cdca7bbb3b09686542df27a043b715973d793953bfc762983 not found: ID does not 
exist" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.999055 5136 scope.go:117] "RemoveContainer" containerID="ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8" Mar 20 07:04:34 crc kubenswrapper[5136]: I0320 07:04:34.999486 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8"} err="failed to get container status \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": rpc error: code = NotFound desc = could not find container \"ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8\": container with ID starting with ef4b822024329075b1c7c38629903833fa36ade53383b1976e24a4b4fc00ebb8 not found: ID does not exist" Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.734203 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/2.log" Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.734928 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/1.log" Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.735005 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-tjpps" event={"ID":"263c5427-a835-40c6-93cb-4bb66a83ea5b","Type":"ContainerStarted","Data":"a98c4a1acebc55db9d4281f82e81128e4f642fe57246d764a74b3c3bee296982"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738565 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"bcf75fb70207c68b87c45ca1e4e718f8049241692381a392a32408d2e09d38ee"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738602 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" 
event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"bff01ad626cc75505ef638c801e2acae3c5169f5760b75211664e8a2dc2b714b"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738616 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"c773d488f1a80fa882be81f2603f998a7ee455721f6a800ef1da1ba448cc33c0"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738628 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"d8c7380730bdf91a603abaa0f907b7b002cc000a943e8b0c0b24a23a892c717d"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738638 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"122192167ce6e33ae405c6e26b212239444d03ced803bc78ecd58b02d2894953"} Mar 20 07:04:35 crc kubenswrapper[5136]: I0320 07:04:35.738646 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"27f43d04b6ab7cc968bd5f9c09ac0ab7e295f2f4ac9a2f462a8dbdffc3c7cf0f"} Mar 20 07:04:36 crc kubenswrapper[5136]: I0320 07:04:36.404217 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="963bf1ca-b871-4cad-a1fc-cf829a70a81a" path="/var/lib/kubelet/pods/963bf1ca-b871-4cad-a1fc-cf829a70a81a/volumes" Mar 20 07:04:38 crc kubenswrapper[5136]: I0320 07:04:38.761702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" 
event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"85404a2f32750d0e85f9ee96e92a26a710ef171ee369d492d225748b50b6e11c"} Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.777033 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" event={"ID":"535b87fd-9e45-4845-8569-975e6c108579","Type":"ContainerStarted","Data":"bdaab549146e0ca42d9039984583fe4ce4d2a58cdd7fb324193ab608ff7351d2"} Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.777658 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.777684 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.777701 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.807139 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" podStartSLOduration=6.80711663 podStartE2EDuration="6.80711663s" podCreationTimestamp="2026-03-20 07:04:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:04:40.805162469 +0000 UTC m=+913.064473630" watchObservedRunningTime="2026-03-20 07:04:40.80711663 +0000 UTC m=+913.066427791" Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.809724 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:40 crc kubenswrapper[5136]: I0320 07:04:40.813943 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:04:43 crc 
kubenswrapper[5136]: I0320 07:04:43.268698 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-jqplr"] Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.269993 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.271942 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.271959 5136 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-65jln" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.272140 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.272141 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.280328 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jqplr"] Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.353754 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.353799 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shp6s\" (UniqueName: \"kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 
07:04:43.353847 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.454840 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.454910 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shp6s\" (UniqueName: \"kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.454962 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.455255 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.456468 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: 
\"kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.472499 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shp6s\" (UniqueName: \"kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s\") pod \"crc-storage-crc-jqplr\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.588589 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.626256 5136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(226a66fe9628635433b4f4b55ef256408541fc48c6791a51cfb7623df2a3a600): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.626454 5136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(226a66fe9628635433b4f4b55ef256408541fc48c6791a51cfb7623df2a3a600): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.626586 5136 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(226a66fe9628635433b4f4b55ef256408541fc48c6791a51cfb7623df2a3a600): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.626730 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-jqplr_crc-storage(868b5502-6c3e-4e3b-bc43-c0875e71512f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-jqplr_crc-storage(868b5502-6c3e-4e3b-bc43-c0875e71512f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(226a66fe9628635433b4f4b55ef256408541fc48c6791a51cfb7623df2a3a600): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-jqplr" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.795165 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: I0320 07:04:43.795779 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.819464 5136 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(e9e46ce0c63674d6f5f1d145700c5ff2dfbf3e102108a3dde3acea23e3ddd701): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.819554 5136 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(e9e46ce0c63674d6f5f1d145700c5ff2dfbf3e102108a3dde3acea23e3ddd701): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.819625 5136 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(e9e46ce0c63674d6f5f1d145700c5ff2dfbf3e102108a3dde3acea23e3ddd701): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:43 crc kubenswrapper[5136]: E0320 07:04:43.819746 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-jqplr_crc-storage(868b5502-6c3e-4e3b-bc43-c0875e71512f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-jqplr_crc-storage(868b5502-6c3e-4e3b-bc43-c0875e71512f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-jqplr_crc-storage_868b5502-6c3e-4e3b-bc43-c0875e71512f_0(e9e46ce0c63674d6f5f1d145700c5ff2dfbf3e102108a3dde3acea23e3ddd701): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-jqplr" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" Mar 20 07:04:45 crc kubenswrapper[5136]: I0320 07:04:45.822133 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:04:45 crc kubenswrapper[5136]: I0320 07:04:45.822963 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:04:56 crc kubenswrapper[5136]: I0320 07:04:56.396578 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:56 crc kubenswrapper[5136]: I0320 07:04:56.397559 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:04:56 crc kubenswrapper[5136]: I0320 07:04:56.596158 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-jqplr"] Mar 20 07:04:56 crc kubenswrapper[5136]: W0320 07:04:56.601398 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod868b5502_6c3e_4e3b_bc43_c0875e71512f.slice/crio-f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658 WatchSource:0}: Error finding container f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658: Status 404 returned error can't find the container with id f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658 Mar 20 07:04:56 crc kubenswrapper[5136]: I0320 07:04:56.878954 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jqplr" event={"ID":"868b5502-6c3e-4e3b-bc43-c0875e71512f","Type":"ContainerStarted","Data":"f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658"} Mar 20 07:04:58 crc kubenswrapper[5136]: I0320 07:04:58.894154 5136 generic.go:334] "Generic (PLEG): container finished" podID="868b5502-6c3e-4e3b-bc43-c0875e71512f" containerID="1634bfed9d3426f391a9ba220363e60d18b7a13e0b5dd7787df7f812b3c4e0ea" exitCode=0 Mar 20 07:04:58 crc kubenswrapper[5136]: I0320 07:04:58.894246 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jqplr" event={"ID":"868b5502-6c3e-4e3b-bc43-c0875e71512f","Type":"ContainerDied","Data":"1634bfed9d3426f391a9ba220363e60d18b7a13e0b5dd7787df7f812b3c4e0ea"} Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.222158 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.283389 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shp6s\" (UniqueName: \"kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s\") pod \"868b5502-6c3e-4e3b-bc43-c0875e71512f\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.283434 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt\") pod \"868b5502-6c3e-4e3b-bc43-c0875e71512f\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.283464 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage\") pod \"868b5502-6c3e-4e3b-bc43-c0875e71512f\" (UID: \"868b5502-6c3e-4e3b-bc43-c0875e71512f\") " Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.283569 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "868b5502-6c3e-4e3b-bc43-c0875e71512f" (UID: "868b5502-6c3e-4e3b-bc43-c0875e71512f"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.289643 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s" (OuterVolumeSpecName: "kube-api-access-shp6s") pod "868b5502-6c3e-4e3b-bc43-c0875e71512f" (UID: "868b5502-6c3e-4e3b-bc43-c0875e71512f"). InnerVolumeSpecName "kube-api-access-shp6s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.296428 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "868b5502-6c3e-4e3b-bc43-c0875e71512f" (UID: "868b5502-6c3e-4e3b-bc43-c0875e71512f"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.385116 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shp6s\" (UniqueName: \"kubernetes.io/projected/868b5502-6c3e-4e3b-bc43-c0875e71512f-kube-api-access-shp6s\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.385159 5136 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/868b5502-6c3e-4e3b-bc43-c0875e71512f-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.385173 5136 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/868b5502-6c3e-4e3b-bc43-c0875e71512f-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.909286 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-jqplr" event={"ID":"868b5502-6c3e-4e3b-bc43-c0875e71512f","Type":"ContainerDied","Data":"f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658"} Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.909331 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4f12c399e84018d1e9e13b1d5c82efd2a9d048d7d148817160148b821359658" Mar 20 07:05:00 crc kubenswrapper[5136]: I0320 07:05:00.910624 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-jqplr" Mar 20 07:05:04 crc kubenswrapper[5136]: I0320 07:05:04.431421 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-mpvnm" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.541935 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj"] Mar 20 07:05:08 crc kubenswrapper[5136]: E0320 07:05:08.542480 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" containerName="storage" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.542495 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" containerName="storage" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.542618 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" containerName="storage" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.543508 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.549456 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.563711 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj"] Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.627966 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.628032 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.628087 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfmtk\" (UniqueName: \"kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: 
I0320 07:05:08.729430 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mfmtk\" (UniqueName: \"kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.729581 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.729629 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.730036 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.730097 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.747519 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mfmtk\" (UniqueName: \"kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:08 crc kubenswrapper[5136]: I0320 07:05:08.862209 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:09 crc kubenswrapper[5136]: I0320 07:05:09.053872 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj"] Mar 20 07:05:09 crc kubenswrapper[5136]: I0320 07:05:09.978304 5136 generic.go:334] "Generic (PLEG): container finished" podID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerID="428e31f96cc381a34b7745bdf5482cfd780f7df9c2bdb635cbc9f4da3377165b" exitCode=0 Mar 20 07:05:09 crc kubenswrapper[5136]: I0320 07:05:09.979317 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" event={"ID":"3cef4dfa-acd1-43f2-adaa-3af5f28046f9","Type":"ContainerDied","Data":"428e31f96cc381a34b7745bdf5482cfd780f7df9c2bdb635cbc9f4da3377165b"} Mar 20 07:05:09 crc kubenswrapper[5136]: I0320 07:05:09.979345 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" event={"ID":"3cef4dfa-acd1-43f2-adaa-3af5f28046f9","Type":"ContainerStarted","Data":"89082b0913ce4f22585d5f49a80b816f5fde9425995a9cacb4fd7966f3c48e3b"} Mar 20 07:05:09 crc kubenswrapper[5136]: I0320 07:05:09.980530 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:05:11 crc kubenswrapper[5136]: I0320 07:05:11.995008 5136 generic.go:334] "Generic (PLEG): container finished" podID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerID="6e15132358ac7794cb0ef54b2c6349bb1c58f1888329abd0c0e15ab7c771b98a" exitCode=0 Mar 20 07:05:11 crc kubenswrapper[5136]: I0320 07:05:11.995099 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" event={"ID":"3cef4dfa-acd1-43f2-adaa-3af5f28046f9","Type":"ContainerDied","Data":"6e15132358ac7794cb0ef54b2c6349bb1c58f1888329abd0c0e15ab7c771b98a"} Mar 20 07:05:13 crc kubenswrapper[5136]: I0320 07:05:13.005367 5136 generic.go:334] "Generic (PLEG): container finished" podID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerID="b5b6ddf7cefec18aee7e2f69a837e4eb2c5c360df1caf1a8b0df1d45126e5b2d" exitCode=0 Mar 20 07:05:13 crc kubenswrapper[5136]: I0320 07:05:13.005468 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" event={"ID":"3cef4dfa-acd1-43f2-adaa-3af5f28046f9","Type":"ContainerDied","Data":"b5b6ddf7cefec18aee7e2f69a837e4eb2c5c360df1caf1a8b0df1d45126e5b2d"} Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.334175 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.403229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfmtk\" (UniqueName: \"kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk\") pod \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.403582 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util\") pod \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.403635 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle\") pod \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\" (UID: \"3cef4dfa-acd1-43f2-adaa-3af5f28046f9\") " Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.405663 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle" (OuterVolumeSpecName: "bundle") pod "3cef4dfa-acd1-43f2-adaa-3af5f28046f9" (UID: "3cef4dfa-acd1-43f2-adaa-3af5f28046f9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.413402 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk" (OuterVolumeSpecName: "kube-api-access-mfmtk") pod "3cef4dfa-acd1-43f2-adaa-3af5f28046f9" (UID: "3cef4dfa-acd1-43f2-adaa-3af5f28046f9"). InnerVolumeSpecName "kube-api-access-mfmtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.418659 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util" (OuterVolumeSpecName: "util") pod "3cef4dfa-acd1-43f2-adaa-3af5f28046f9" (UID: "3cef4dfa-acd1-43f2-adaa-3af5f28046f9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.505497 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mfmtk\" (UniqueName: \"kubernetes.io/projected/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-kube-api-access-mfmtk\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.505547 5136 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-util\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:14 crc kubenswrapper[5136]: I0320 07:05:14.505566 5136 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3cef4dfa-acd1-43f2-adaa-3af5f28046f9-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.018531 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" event={"ID":"3cef4dfa-acd1-43f2-adaa-3af5f28046f9","Type":"ContainerDied","Data":"89082b0913ce4f22585d5f49a80b816f5fde9425995a9cacb4fd7966f3c48e3b"} Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.018569 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89082b0913ce4f22585d5f49a80b816f5fde9425995a9cacb4fd7966f3c48e3b" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.018666 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.822418 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.822499 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.822559 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.823614 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:05:15 crc kubenswrapper[5136]: I0320 07:05:15.823713 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52" gracePeriod=600 Mar 20 07:05:16 crc kubenswrapper[5136]: I0320 07:05:16.030506 5136 generic.go:334] "Generic (PLEG): 
container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52" exitCode=0 Mar 20 07:05:16 crc kubenswrapper[5136]: I0320 07:05:16.030585 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52"} Mar 20 07:05:16 crc kubenswrapper[5136]: I0320 07:05:16.031155 5136 scope.go:117] "RemoveContainer" containerID="1f8f243bed8a27d330b6c6e9f13f637dfb5738fe5b93ac9f02957069f4cfce8a" Mar 20 07:05:17 crc kubenswrapper[5136]: I0320 07:05:17.039483 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f"} Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.073535 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-mzffz"] Mar 20 07:05:18 crc kubenswrapper[5136]: E0320 07:05:18.074105 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="util" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.074122 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="util" Mar 20 07:05:18 crc kubenswrapper[5136]: E0320 07:05:18.074132 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="pull" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.074142 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="pull" Mar 20 07:05:18 crc kubenswrapper[5136]: E0320 07:05:18.074169 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="extract" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.074178 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="extract" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.074300 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cef4dfa-acd1-43f2-adaa-3af5f28046f9" containerName="extract" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.074742 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.076392 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8wbjw" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.076776 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.076976 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.117784 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-mzffz"] Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.169575 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzklz\" (UniqueName: \"kubernetes.io/projected/94018849-bf2a-47b4-be05-5e9ff0e0dfbd-kube-api-access-xzklz\") pod \"nmstate-operator-796d4cfff4-mzffz\" (UID: \"94018849-bf2a-47b4-be05-5e9ff0e0dfbd\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.271040 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-xzklz\" (UniqueName: \"kubernetes.io/projected/94018849-bf2a-47b4-be05-5e9ff0e0dfbd-kube-api-access-xzklz\") pod \"nmstate-operator-796d4cfff4-mzffz\" (UID: \"94018849-bf2a-47b4-be05-5e9ff0e0dfbd\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.288634 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzklz\" (UniqueName: \"kubernetes.io/projected/94018849-bf2a-47b4-be05-5e9ff0e0dfbd-kube-api-access-xzklz\") pod \"nmstate-operator-796d4cfff4-mzffz\" (UID: \"94018849-bf2a-47b4-be05-5e9ff0e0dfbd\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.389254 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" Mar 20 07:05:18 crc kubenswrapper[5136]: I0320 07:05:18.582651 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-mzffz"] Mar 20 07:05:19 crc kubenswrapper[5136]: I0320 07:05:19.049412 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" event={"ID":"94018849-bf2a-47b4-be05-5e9ff0e0dfbd","Type":"ContainerStarted","Data":"76997a707fd2f10632b8dd5bb0ae9f9ccb9785781e7ea26639f8901c3e69b18f"} Mar 20 07:05:21 crc kubenswrapper[5136]: I0320 07:05:21.064330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" event={"ID":"94018849-bf2a-47b4-be05-5e9ff0e0dfbd","Type":"ContainerStarted","Data":"4071e7ab7dc9dc2e18df122edb9d6a3baae7ce2f6b36ad43a35a7aea2e94d2a0"} Mar 20 07:05:21 crc kubenswrapper[5136]: I0320 07:05:21.091379 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-mzffz" podStartSLOduration=1.124671199 podStartE2EDuration="3.091353008s" 
podCreationTimestamp="2026-03-20 07:05:18 +0000 UTC" firstStartedPulling="2026-03-20 07:05:18.587176765 +0000 UTC m=+950.846487906" lastFinishedPulling="2026-03-20 07:05:20.553858554 +0000 UTC m=+952.813169715" observedRunningTime="2026-03-20 07:05:21.084453511 +0000 UTC m=+953.343764662" watchObservedRunningTime="2026-03-20 07:05:21.091353008 +0000 UTC m=+953.350664199" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.304107 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.305623 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.325303 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.375046 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.375091 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ldh9\" (UniqueName: \"kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.375126 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.476181 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.476330 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.476349 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ldh9\" (UniqueName: \"kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.476669 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.477189 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.497943 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ldh9\" (UniqueName: \"kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9\") pod \"redhat-marketplace-5rs9j\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.621850 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:26 crc kubenswrapper[5136]: I0320 07:05:26.885801 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:27 crc kubenswrapper[5136]: I0320 07:05:27.100946 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerStarted","Data":"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea"} Mar 20 07:05:27 crc kubenswrapper[5136]: I0320 07:05:27.100997 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerStarted","Data":"3ac1ac65683420394fa986da7d4411ae731402826717eacf8969f7dd09435281"} Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.110496 5136 generic.go:334] "Generic (PLEG): container finished" podID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerID="bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea" exitCode=0 Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.110583 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerDied","Data":"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea"} Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.806681 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.807718 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.817027 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-j7mkp" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.831975 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.848961 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-k7799"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.849770 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.853295 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.880923 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-k7799"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.900240 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-7bqsc"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.901136 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.919507 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwdvp\" (UniqueName: \"kubernetes.io/projected/21fd222d-3101-4c49-bbca-611916a57ae8-kube-api-access-fwdvp\") pod \"nmstate-metrics-9b8c8685d-dxl94\" (UID: \"21fd222d-3101-4c49-bbca-611916a57ae8\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.919587 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8bnn\" (UniqueName: \"kubernetes.io/projected/a6f3f958-ebef-4d11-be1e-1cd2d431006c-kube-api-access-q8bnn\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.919634 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a6f3f958-ebef-4d11-be1e-1cd2d431006c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.994389 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf"] Mar 20 07:05:28 crc kubenswrapper[5136]: I0320 07:05:28.999203 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.000773 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf"] Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.004543 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.004705 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.005848 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nb66b" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.006687 5136 scope.go:117] "RemoveContainer" containerID="1c01112d95a6d7df1bf6ad50f71f8cfb4347ff1551a5c84b8e28ff7554122644" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021435 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/a6f3f958-ebef-4d11-be1e-1cd2d431006c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021517 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwdvp\" (UniqueName: \"kubernetes.io/projected/21fd222d-3101-4c49-bbca-611916a57ae8-kube-api-access-fwdvp\") pod \"nmstate-metrics-9b8c8685d-dxl94\" (UID: \"21fd222d-3101-4c49-bbca-611916a57ae8\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021549 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-dbus-socket\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021575 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-ovs-socket\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-nmstate-lock\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021648 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8bnn\" (UniqueName: \"kubernetes.io/projected/a6f3f958-ebef-4d11-be1e-1cd2d431006c-kube-api-access-q8bnn\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.021677 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22tj2\" (UniqueName: \"kubernetes.io/projected/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-kube-api-access-22tj2\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.044157 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: 
\"kubernetes.io/secret/a6f3f958-ebef-4d11-be1e-1cd2d431006c-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.061726 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwdvp\" (UniqueName: \"kubernetes.io/projected/21fd222d-3101-4c49-bbca-611916a57ae8-kube-api-access-fwdvp\") pod \"nmstate-metrics-9b8c8685d-dxl94\" (UID: \"21fd222d-3101-4c49-bbca-611916a57ae8\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.061939 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8bnn\" (UniqueName: \"kubernetes.io/projected/a6f3f958-ebef-4d11-be1e-1cd2d431006c-kube-api-access-q8bnn\") pod \"nmstate-webhook-5f558f5558-k7799\" (UID: \"a6f3f958-ebef-4d11-be1e-1cd2d431006c\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122574 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3bdd0e88-cfa4-410a-b619-7918a813120d-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122639 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-nmstate-lock\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122667 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122726 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22tj2\" (UniqueName: \"kubernetes.io/projected/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-kube-api-access-22tj2\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122763 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svx8d\" (UniqueName: \"kubernetes.io/projected/3bdd0e88-cfa4-410a-b619-7918a813120d-kube-api-access-svx8d\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122860 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-tjpps_263c5427-a835-40c6-93cb-4bb66a83ea5b/kube-multus/2.log" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122798 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-nmstate-lock\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.122996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-dbus-socket\") pod 
\"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.123051 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-ovs-socket\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.123137 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-ovs-socket\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.123317 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-dbus-socket\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.125026 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerStarted","Data":"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95"} Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.147456 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22tj2\" (UniqueName: \"kubernetes.io/projected/43a9811e-7a36-4f11-9f02-ac3e4c00c42d-kube-api-access-22tj2\") pod \"nmstate-handler-7bqsc\" (UID: \"43a9811e-7a36-4f11-9f02-ac3e4c00c42d\") " pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 
07:05:29.165974 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.172668 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-d7dd7d448-jtlk5"] Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.173464 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.189082 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d7dd7d448-jtlk5"] Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.195767 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224186 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224257 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-oauth-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224286 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-oauth-config\") pod 
\"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224308 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svx8d\" (UniqueName: \"kubernetes.io/projected/3bdd0e88-cfa4-410a-b619-7918a813120d-kube-api-access-svx8d\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224332 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-service-ca\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224366 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-trusted-ca-bundle\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224389 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkbzh\" (UniqueName: \"kubernetes.io/projected/1d701525-fad2-4a68-8594-a7d5020c6883-kube-api-access-dkbzh\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224417 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/3bdd0e88-cfa4-410a-b619-7918a813120d-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: E0320 07:05:29.224419 5136 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224471 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.224448 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-console-config\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: E0320 07:05:29.224555 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert podName:3bdd0e88-cfa4-410a-b619-7918a813120d nodeName:}" failed. No retries permitted until 2026-03-20 07:05:29.724507657 +0000 UTC m=+961.983818808 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-rsxkf" (UID: "3bdd0e88-cfa4-410a-b619-7918a813120d") : secret "plugin-serving-cert" not found Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.225351 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.225473 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/3bdd0e88-cfa4-410a-b619-7918a813120d-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.243118 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svx8d\" (UniqueName: \"kubernetes.io/projected/3bdd0e88-cfa4-410a-b619-7918a813120d-kube-api-access-svx8d\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: W0320 07:05:29.293026 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43a9811e_7a36_4f11_9f02_ac3e4c00c42d.slice/crio-709dfa8c7706b93d121ad6665b59eb754d82e9491db881215c3bda46ec7c5c70 WatchSource:0}: Error finding container 709dfa8c7706b93d121ad6665b59eb754d82e9491db881215c3bda46ec7c5c70: Status 404 returned error can't find the container 
with id 709dfa8c7706b93d121ad6665b59eb754d82e9491db881215c3bda46ec7c5c70 Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326656 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-console-config\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326722 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326774 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-oauth-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326797 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-oauth-config\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326854 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-service-ca\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " 
pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326886 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-trusted-ca-bundle\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.326905 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkbzh\" (UniqueName: \"kubernetes.io/projected/1d701525-fad2-4a68-8594-a7d5020c6883-kube-api-access-dkbzh\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.327961 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-console-config\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.328793 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-oauth-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.328912 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-service-ca\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc 
kubenswrapper[5136]: I0320 07:05:29.329576 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1d701525-fad2-4a68-8594-a7d5020c6883-trusted-ca-bundle\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.333678 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-oauth-config\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.334245 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/1d701525-fad2-4a68-8594-a7d5020c6883-console-serving-cert\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.342837 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkbzh\" (UniqueName: \"kubernetes.io/projected/1d701525-fad2-4a68-8594-a7d5020c6883-kube-api-access-dkbzh\") pod \"console-d7dd7d448-jtlk5\" (UID: \"1d701525-fad2-4a68-8594-a7d5020c6883\") " pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.378646 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94"] Mar 20 07:05:29 crc kubenswrapper[5136]: W0320 07:05:29.385507 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21fd222d_3101_4c49_bbca_611916a57ae8.slice/crio-615f384adc63751c0e3815a5e1dda99c0f2ca809e1c494525e30531c7c0a70d5 WatchSource:0}: Error finding container 615f384adc63751c0e3815a5e1dda99c0f2ca809e1c494525e30531c7c0a70d5: Status 404 returned error can't find the container with id 615f384adc63751c0e3815a5e1dda99c0f2ca809e1c494525e30531c7c0a70d5 Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.452393 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-k7799"] Mar 20 07:05:29 crc kubenswrapper[5136]: W0320 07:05:29.457290 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6f3f958_ebef_4d11_be1e_1cd2d431006c.slice/crio-7fdd004d3e27527e5b18765e50f822e124239301b750c06ebc8d50d713272b52 WatchSource:0}: Error finding container 7fdd004d3e27527e5b18765e50f822e124239301b750c06ebc8d50d713272b52: Status 404 returned error can't find the container with id 7fdd004d3e27527e5b18765e50f822e124239301b750c06ebc8d50d713272b52 Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.497500 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.681968 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d7dd7d448-jtlk5"] Mar 20 07:05:29 crc kubenswrapper[5136]: W0320 07:05:29.692703 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d701525_fad2_4a68_8594_a7d5020c6883.slice/crio-7873cdb5c88cce32d3e577cf74b78c7f19428662efa1a82d94c373e220f588c9 WatchSource:0}: Error finding container 7873cdb5c88cce32d3e577cf74b78c7f19428662efa1a82d94c373e220f588c9: Status 404 returned error can't find the container with id 7873cdb5c88cce32d3e577cf74b78c7f19428662efa1a82d94c373e220f588c9 Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.733128 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.738759 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/3bdd0e88-cfa4-410a-b619-7918a813120d-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-rsxkf\" (UID: \"3bdd0e88-cfa4-410a-b619-7918a813120d\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:29 crc kubenswrapper[5136]: I0320 07:05:29.914859 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.099860 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf"] Mar 20 07:05:30 crc kubenswrapper[5136]: W0320 07:05:30.112026 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bdd0e88_cfa4_410a_b619_7918a813120d.slice/crio-958e00c6e35162a8b8c7eb9336c6f7853a82a87997f76b8dc4a11c8834c3dbd2 WatchSource:0}: Error finding container 958e00c6e35162a8b8c7eb9336c6f7853a82a87997f76b8dc4a11c8834c3dbd2: Status 404 returned error can't find the container with id 958e00c6e35162a8b8c7eb9336c6f7853a82a87997f76b8dc4a11c8834c3dbd2 Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.136288 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" event={"ID":"3bdd0e88-cfa4-410a-b619-7918a813120d","Type":"ContainerStarted","Data":"958e00c6e35162a8b8c7eb9336c6f7853a82a87997f76b8dc4a11c8834c3dbd2"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.137625 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" event={"ID":"21fd222d-3101-4c49-bbca-611916a57ae8","Type":"ContainerStarted","Data":"615f384adc63751c0e3815a5e1dda99c0f2ca809e1c494525e30531c7c0a70d5"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.138564 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" event={"ID":"a6f3f958-ebef-4d11-be1e-1cd2d431006c","Type":"ContainerStarted","Data":"7fdd004d3e27527e5b18765e50f822e124239301b750c06ebc8d50d713272b52"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.139758 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d7dd7d448-jtlk5" 
event={"ID":"1d701525-fad2-4a68-8594-a7d5020c6883","Type":"ContainerStarted","Data":"ada7116ce017937b8f0a9a111686b7c7a64ae8609eb962c8ceca25296b62bd8e"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.139784 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d7dd7d448-jtlk5" event={"ID":"1d701525-fad2-4a68-8594-a7d5020c6883","Type":"ContainerStarted","Data":"7873cdb5c88cce32d3e577cf74b78c7f19428662efa1a82d94c373e220f588c9"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.140766 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7bqsc" event={"ID":"43a9811e-7a36-4f11-9f02-ac3e4c00c42d","Type":"ContainerStarted","Data":"709dfa8c7706b93d121ad6665b59eb754d82e9491db881215c3bda46ec7c5c70"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.145301 5136 generic.go:334] "Generic (PLEG): container finished" podID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerID="99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95" exitCode=0 Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.145330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerDied","Data":"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95"} Mar 20 07:05:30 crc kubenswrapper[5136]: I0320 07:05:30.163038 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-d7dd7d448-jtlk5" podStartSLOduration=1.163017706 podStartE2EDuration="1.163017706s" podCreationTimestamp="2026-03-20 07:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:05:30.159068823 +0000 UTC m=+962.418380004" watchObservedRunningTime="2026-03-20 07:05:30.163017706 +0000 UTC m=+962.422328867" Mar 20 07:05:31 crc kubenswrapper[5136]: I0320 07:05:31.157086 
5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerStarted","Data":"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6"} Mar 20 07:05:31 crc kubenswrapper[5136]: I0320 07:05:31.173112 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5rs9j" podStartSLOduration=2.643470388 podStartE2EDuration="5.173089568s" podCreationTimestamp="2026-03-20 07:05:26 +0000 UTC" firstStartedPulling="2026-03-20 07:05:28.112476548 +0000 UTC m=+960.371787709" lastFinishedPulling="2026-03-20 07:05:30.642095728 +0000 UTC m=+962.901406889" observedRunningTime="2026-03-20 07:05:31.172290134 +0000 UTC m=+963.431601295" watchObservedRunningTime="2026-03-20 07:05:31.173089568 +0000 UTC m=+963.432400719" Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.173186 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" event={"ID":"a6f3f958-ebef-4d11-be1e-1cd2d431006c","Type":"ContainerStarted","Data":"e58a00bc58c6fd5a4a4e721567778597409a6f4dc3ca2e1856e95c2cfd9d8bc9"} Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.173987 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.176702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-7bqsc" event={"ID":"43a9811e-7a36-4f11-9f02-ac3e4c00c42d","Type":"ContainerStarted","Data":"53317609cf6b412238f8997f579f91bb64920e388729cadbb6fd564056bcf41f"} Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.176868 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.178712 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" event={"ID":"3bdd0e88-cfa4-410a-b619-7918a813120d","Type":"ContainerStarted","Data":"38a3dcf7fe14a3caebdc8d3d4f563149c7861e752afc066b5ef4a3d2abd15437"} Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.180572 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" event={"ID":"21fd222d-3101-4c49-bbca-611916a57ae8","Type":"ContainerStarted","Data":"cfc87f4736255fedf55915c3a92a5749bd2cefc197e5ea1195c401a8324f1128"} Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.197570 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" podStartSLOduration=1.959258296 podStartE2EDuration="5.197533599s" podCreationTimestamp="2026-03-20 07:05:28 +0000 UTC" firstStartedPulling="2026-03-20 07:05:29.460395174 +0000 UTC m=+961.719706325" lastFinishedPulling="2026-03-20 07:05:32.698670477 +0000 UTC m=+964.957981628" observedRunningTime="2026-03-20 07:05:33.189188297 +0000 UTC m=+965.448499448" watchObservedRunningTime="2026-03-20 07:05:33.197533599 +0000 UTC m=+965.456844760" Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.239284 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-7bqsc" podStartSLOduration=1.834896477 podStartE2EDuration="5.239264287s" podCreationTimestamp="2026-03-20 07:05:28 +0000 UTC" firstStartedPulling="2026-03-20 07:05:29.302975998 +0000 UTC m=+961.562287149" lastFinishedPulling="2026-03-20 07:05:32.707343808 +0000 UTC m=+964.966654959" observedRunningTime="2026-03-20 07:05:33.238596467 +0000 UTC m=+965.497907618" watchObservedRunningTime="2026-03-20 07:05:33.239264287 +0000 UTC m=+965.498575438" Mar 20 07:05:33 crc kubenswrapper[5136]: I0320 07:05:33.260585 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-rsxkf" podStartSLOduration=2.676025173 podStartE2EDuration="5.260567536s" podCreationTimestamp="2026-03-20 07:05:28 +0000 UTC" firstStartedPulling="2026-03-20 07:05:30.113725721 +0000 UTC m=+962.373036872" lastFinishedPulling="2026-03-20 07:05:32.698268084 +0000 UTC m=+964.957579235" observedRunningTime="2026-03-20 07:05:33.258332786 +0000 UTC m=+965.517643937" watchObservedRunningTime="2026-03-20 07:05:33.260567536 +0000 UTC m=+965.519878687" Mar 20 07:05:35 crc kubenswrapper[5136]: I0320 07:05:35.198850 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" event={"ID":"21fd222d-3101-4c49-bbca-611916a57ae8","Type":"ContainerStarted","Data":"bf8589fddb236ddf7aa044ac3b5d89aef5a7acfd6d11b710bef35338be588442"} Mar 20 07:05:36 crc kubenswrapper[5136]: I0320 07:05:36.622708 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:36 crc kubenswrapper[5136]: I0320 07:05:36.622754 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:36 crc kubenswrapper[5136]: I0320 07:05:36.694158 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:36 crc kubenswrapper[5136]: I0320 07:05:36.720224 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-dxl94" podStartSLOduration=3.186077606 podStartE2EDuration="8.720201078s" podCreationTimestamp="2026-03-20 07:05:28 +0000 UTC" firstStartedPulling="2026-03-20 07:05:29.388577262 +0000 UTC m=+961.647888413" lastFinishedPulling="2026-03-20 07:05:34.922700734 +0000 UTC m=+967.182011885" observedRunningTime="2026-03-20 07:05:35.227272675 +0000 UTC m=+967.486583826" watchObservedRunningTime="2026-03-20 
07:05:36.720201078 +0000 UTC m=+968.979512269" Mar 20 07:05:37 crc kubenswrapper[5136]: I0320 07:05:37.276079 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:37 crc kubenswrapper[5136]: I0320 07:05:37.330120 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.228649 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5rs9j" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="registry-server" containerID="cri-o://b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6" gracePeriod=2 Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.249073 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-7bqsc" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.498323 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.498636 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.507793 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.645335 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.670677 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content\") pod \"69b32ea3-4438-4807-9a81-41026ec34ad8\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.670845 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities\") pod \"69b32ea3-4438-4807-9a81-41026ec34ad8\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.670887 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ldh9\" (UniqueName: \"kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9\") pod \"69b32ea3-4438-4807-9a81-41026ec34ad8\" (UID: \"69b32ea3-4438-4807-9a81-41026ec34ad8\") " Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.671664 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities" (OuterVolumeSpecName: "utilities") pod "69b32ea3-4438-4807-9a81-41026ec34ad8" (UID: "69b32ea3-4438-4807-9a81-41026ec34ad8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.678587 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9" (OuterVolumeSpecName: "kube-api-access-7ldh9") pod "69b32ea3-4438-4807-9a81-41026ec34ad8" (UID: "69b32ea3-4438-4807-9a81-41026ec34ad8"). InnerVolumeSpecName "kube-api-access-7ldh9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.693450 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69b32ea3-4438-4807-9a81-41026ec34ad8" (UID: "69b32ea3-4438-4807-9a81-41026ec34ad8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.772675 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.772727 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69b32ea3-4438-4807-9a81-41026ec34ad8-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:39 crc kubenswrapper[5136]: I0320 07:05:39.772752 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ldh9\" (UniqueName: \"kubernetes.io/projected/69b32ea3-4438-4807-9a81-41026ec34ad8-kube-api-access-7ldh9\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.236296 5136 generic.go:334] "Generic (PLEG): container finished" podID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerID="b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6" exitCode=0 Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.236377 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5rs9j" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.236369 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerDied","Data":"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6"} Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.236904 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5rs9j" event={"ID":"69b32ea3-4438-4807-9a81-41026ec34ad8","Type":"ContainerDied","Data":"3ac1ac65683420394fa986da7d4411ae731402826717eacf8969f7dd09435281"} Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.236936 5136 scope.go:117] "RemoveContainer" containerID="b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.243653 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-d7dd7d448-jtlk5" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.267503 5136 scope.go:117] "RemoveContainer" containerID="99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.287538 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.294998 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5rs9j"] Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.304094 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"] Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.312190 5136 scope.go:117] "RemoveContainer" containerID="bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 
07:05:40.344400 5136 scope.go:117] "RemoveContainer" containerID="b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6" Mar 20 07:05:40 crc kubenswrapper[5136]: E0320 07:05:40.344913 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6\": container with ID starting with b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6 not found: ID does not exist" containerID="b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.344943 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6"} err="failed to get container status \"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6\": rpc error: code = NotFound desc = could not find container \"b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6\": container with ID starting with b05c4a3c4517ab944ca7d69a16a3e8fedde605ac2f10e67681f516572dc40bb6 not found: ID does not exist" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.344962 5136 scope.go:117] "RemoveContainer" containerID="99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95" Mar 20 07:05:40 crc kubenswrapper[5136]: E0320 07:05:40.349011 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95\": container with ID starting with 99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95 not found: ID does not exist" containerID="99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.349079 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95"} err="failed to get container status \"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95\": rpc error: code = NotFound desc = could not find container \"99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95\": container with ID starting with 99020cf926b90ef473a75ca9d43847ea5e162fdffd7c1c073e067327673f7a95 not found: ID does not exist" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.349120 5136 scope.go:117] "RemoveContainer" containerID="bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea" Mar 20 07:05:40 crc kubenswrapper[5136]: E0320 07:05:40.349465 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea\": container with ID starting with bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea not found: ID does not exist" containerID="bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.349491 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea"} err="failed to get container status \"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea\": rpc error: code = NotFound desc = could not find container \"bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea\": container with ID starting with bc0eebe03fa0a2eff9b96cce2d6b25860cbf759d0d063c2d2b7e4310b06c1aea not found: ID does not exist" Mar 20 07:05:40 crc kubenswrapper[5136]: I0320 07:05:40.403222 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" path="/var/lib/kubelet/pods/69b32ea3-4438-4807-9a81-41026ec34ad8/volumes" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 
07:05:42.503658 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:42 crc kubenswrapper[5136]: E0320 07:05:42.504659 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="extract-utilities" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.504691 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="extract-utilities" Mar 20 07:05:42 crc kubenswrapper[5136]: E0320 07:05:42.504720 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="extract-content" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.504737 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="extract-content" Mar 20 07:05:42 crc kubenswrapper[5136]: E0320 07:05:42.504775 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="registry-server" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.504791 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="registry-server" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.505083 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="69b32ea3-4438-4807-9a81-41026ec34ad8" containerName="registry-server" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.506854 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.511621 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.609352 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.609399 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.609425 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh64q\" (UniqueName: \"kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.710852 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.711073 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.711143 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh64q\" (UniqueName: \"kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.711435 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.711798 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.732007 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh64q\" (UniqueName: \"kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q\") pod \"certified-operators-bv4vd\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:42 crc kubenswrapper[5136]: I0320 07:05:42.837896 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:43 crc kubenswrapper[5136]: I0320 07:05:43.111984 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:43 crc kubenswrapper[5136]: W0320 07:05:43.118445 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65f83417_5dc9_4526_bcbe_5927b6ccdd8a.slice/crio-c993b594e4ab541537e13f505766f2f214c3f366d637f3c7b4612000f5d64348 WatchSource:0}: Error finding container c993b594e4ab541537e13f505766f2f214c3f366d637f3c7b4612000f5d64348: Status 404 returned error can't find the container with id c993b594e4ab541537e13f505766f2f214c3f366d637f3c7b4612000f5d64348 Mar 20 07:05:43 crc kubenswrapper[5136]: I0320 07:05:43.255862 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerStarted","Data":"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da"} Mar 20 07:05:43 crc kubenswrapper[5136]: I0320 07:05:43.255913 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerStarted","Data":"c993b594e4ab541537e13f505766f2f214c3f366d637f3c7b4612000f5d64348"} Mar 20 07:05:44 crc kubenswrapper[5136]: I0320 07:05:44.264953 5136 generic.go:334] "Generic (PLEG): container finished" podID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerID="560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da" exitCode=0 Mar 20 07:05:44 crc kubenswrapper[5136]: I0320 07:05:44.265023 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" 
event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerDied","Data":"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da"} Mar 20 07:05:46 crc kubenswrapper[5136]: I0320 07:05:46.279149 5136 generic.go:334] "Generic (PLEG): container finished" podID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerID="53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0" exitCode=0 Mar 20 07:05:46 crc kubenswrapper[5136]: I0320 07:05:46.279211 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerDied","Data":"53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0"} Mar 20 07:05:47 crc kubenswrapper[5136]: I0320 07:05:47.287096 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerStarted","Data":"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836"} Mar 20 07:05:47 crc kubenswrapper[5136]: I0320 07:05:47.314336 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bv4vd" podStartSLOduration=2.913591146 podStartE2EDuration="5.314314506s" podCreationTimestamp="2026-03-20 07:05:42 +0000 UTC" firstStartedPulling="2026-03-20 07:05:44.267055233 +0000 UTC m=+976.526366424" lastFinishedPulling="2026-03-20 07:05:46.667778613 +0000 UTC m=+978.927089784" observedRunningTime="2026-03-20 07:05:47.30713048 +0000 UTC m=+979.566441671" watchObservedRunningTime="2026-03-20 07:05:47.314314506 +0000 UTC m=+979.573625677" Mar 20 07:05:49 crc kubenswrapper[5136]: I0320 07:05:49.201703 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-k7799" Mar 20 07:05:52 crc kubenswrapper[5136]: I0320 07:05:52.838113 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:52 crc kubenswrapper[5136]: I0320 07:05:52.838900 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:52 crc kubenswrapper[5136]: I0320 07:05:52.882387 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:53 crc kubenswrapper[5136]: I0320 07:05:53.361443 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:53 crc kubenswrapper[5136]: I0320 07:05:53.411611 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:55 crc kubenswrapper[5136]: I0320 07:05:55.333586 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bv4vd" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="registry-server" containerID="cri-o://54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836" gracePeriod=2 Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.007909 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.183063 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities\") pod \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.183101 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh64q\" (UniqueName: \"kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q\") pod \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.183131 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content\") pod \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\" (UID: \"65f83417-5dc9-4526-bcbe-5927b6ccdd8a\") " Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.185673 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities" (OuterVolumeSpecName: "utilities") pod "65f83417-5dc9-4526-bcbe-5927b6ccdd8a" (UID: "65f83417-5dc9-4526-bcbe-5927b6ccdd8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.190411 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q" (OuterVolumeSpecName: "kube-api-access-mh64q") pod "65f83417-5dc9-4526-bcbe-5927b6ccdd8a" (UID: "65f83417-5dc9-4526-bcbe-5927b6ccdd8a"). InnerVolumeSpecName "kube-api-access-mh64q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.248581 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65f83417-5dc9-4526-bcbe-5927b6ccdd8a" (UID: "65f83417-5dc9-4526-bcbe-5927b6ccdd8a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.284406 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.284449 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh64q\" (UniqueName: \"kubernetes.io/projected/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-kube-api-access-mh64q\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.284463 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65f83417-5dc9-4526-bcbe-5927b6ccdd8a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.340339 5136 generic.go:334] "Generic (PLEG): container finished" podID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerID="54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836" exitCode=0 Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.340381 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerDied","Data":"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836"} Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.340406 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-bv4vd" event={"ID":"65f83417-5dc9-4526-bcbe-5927b6ccdd8a","Type":"ContainerDied","Data":"c993b594e4ab541537e13f505766f2f214c3f366d637f3c7b4612000f5d64348"} Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.340422 5136 scope.go:117] "RemoveContainer" containerID="54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.340512 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bv4vd" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.356507 5136 scope.go:117] "RemoveContainer" containerID="53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.365627 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.370350 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bv4vd"] Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.389977 5136 scope.go:117] "RemoveContainer" containerID="560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.402616 5136 scope.go:117] "RemoveContainer" containerID="54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836" Mar 20 07:05:56 crc kubenswrapper[5136]: E0320 07:05:56.403048 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836\": container with ID starting with 54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836 not found: ID does not exist" containerID="54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 
07:05:56.403082 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836"} err="failed to get container status \"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836\": rpc error: code = NotFound desc = could not find container \"54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836\": container with ID starting with 54e5fcbcef904af60d1e88e1436565d422350e64f921c705da2a9e462c2e8836 not found: ID does not exist" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.403107 5136 scope.go:117] "RemoveContainer" containerID="53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0" Mar 20 07:05:56 crc kubenswrapper[5136]: E0320 07:05:56.403726 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0\": container with ID starting with 53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0 not found: ID does not exist" containerID="53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.403756 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0"} err="failed to get container status \"53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0\": rpc error: code = NotFound desc = could not find container \"53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0\": container with ID starting with 53f2dbe658afb6223277611ae804aead0552b80729cde29b5a8f3be839f9aea0 not found: ID does not exist" Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.403775 5136 scope.go:117] "RemoveContainer" containerID="560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da" Mar 20 07:05:56 crc 
kubenswrapper[5136]: I0320 07:05:56.404036 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" path="/var/lib/kubelet/pods/65f83417-5dc9-4526-bcbe-5927b6ccdd8a/volumes"
Mar 20 07:05:56 crc kubenswrapper[5136]: E0320 07:05:56.404187 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da\": container with ID starting with 560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da not found: ID does not exist" containerID="560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da"
Mar 20 07:05:56 crc kubenswrapper[5136]: I0320 07:05:56.404312 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da"} err="failed to get container status \"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da\": rpc error: code = NotFound desc = could not find container \"560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da\": container with ID starting with 560a5a98ddeb1e1cdbc90bc720c33ffbb91993891e1882143cd0afaf1df3f5da not found: ID does not exist"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.138312 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566506-bbg6r"]
Mar 20 07:06:00 crc kubenswrapper[5136]: E0320 07:06:00.139217 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="extract-utilities"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.139234 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="extract-utilities"
Mar 20 07:06:00 crc kubenswrapper[5136]: E0320 07:06:00.139249 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="registry-server"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.139258 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="registry-server"
Mar 20 07:06:00 crc kubenswrapper[5136]: E0320 07:06:00.139275 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="extract-content"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.139284 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="extract-content"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.139413 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="65f83417-5dc9-4526-bcbe-5927b6ccdd8a" containerName="registry-server"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.139880 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566506-bbg6r"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.145947 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566506-bbg6r"]
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.148031 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.148304 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.151271 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.213053 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-22q85"]
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.214679 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22q85"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.228507 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22q85"]
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.232698 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmm5s\" (UniqueName: \"kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s\") pod \"auto-csr-approver-29566506-bbg6r\" (UID: \"ca3533ad-761e-45d8-8a1a-0e679b602e08\") " pod="openshift-infra/auto-csr-approver-29566506-bbg6r"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.334354 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.334396 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmm5s\" (UniqueName: \"kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s\") pod \"auto-csr-approver-29566506-bbg6r\" (UID: \"ca3533ad-761e-45d8-8a1a-0e679b602e08\") " pod="openshift-infra/auto-csr-approver-29566506-bbg6r"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.334420 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.334440 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfl2\" (UniqueName: \"kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.352959 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmm5s\" (UniqueName: \"kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s\") pod \"auto-csr-approver-29566506-bbg6r\" (UID: \"ca3533ad-761e-45d8-8a1a-0e679b602e08\") " pod="openshift-infra/auto-csr-approver-29566506-bbg6r"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.436316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.436380 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.436409 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwfl2\" (UniqueName: \"kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.436871 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.436886 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.454924 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwfl2\" (UniqueName: \"kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2\") pod \"community-operators-22q85\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " pod="openshift-marketplace/community-operators-22q85"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.467567 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566506-bbg6r"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.531152 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-22q85"
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.829608 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22q85"]
Mar 20 07:06:00 crc kubenswrapper[5136]: I0320 07:06:00.981239 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566506-bbg6r"]
Mar 20 07:06:01 crc kubenswrapper[5136]: W0320 07:06:01.013468 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca3533ad_761e_45d8_8a1a_0e679b602e08.slice/crio-5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8 WatchSource:0}: Error finding container 5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8: Status 404 returned error can't find the container with id 5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8
Mar 20 07:06:01 crc kubenswrapper[5136]: I0320 07:06:01.374723 5136 generic.go:334] "Generic (PLEG): container finished" podID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerID="4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff" exitCode=0
Mar 20 07:06:01 crc kubenswrapper[5136]: I0320 07:06:01.374765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerDied","Data":"4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff"}
Mar 20 07:06:01 crc kubenswrapper[5136]: I0320 07:06:01.374823 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerStarted","Data":"b53b093dff156948f43b9de07506ee639a75c958288136288faaca5060f1f42b"}
Mar 20 07:06:01 crc kubenswrapper[5136]: I0320 07:06:01.376789 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" event={"ID":"ca3533ad-761e-45d8-8a1a-0e679b602e08","Type":"ContainerStarted","Data":"5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8"}
Mar 20 07:06:02 crc kubenswrapper[5136]: I0320 07:06:02.383379 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" event={"ID":"ca3533ad-761e-45d8-8a1a-0e679b602e08","Type":"ContainerStarted","Data":"6bbf8fa191e070a1c91f2d1ea94b4f26f7559168925696c34903d08a5a0065c5"}
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.175233 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" podStartSLOduration=2.115247376 podStartE2EDuration="3.175214973s" podCreationTimestamp="2026-03-20 07:06:00 +0000 UTC" firstStartedPulling="2026-03-20 07:06:01.015612775 +0000 UTC m=+993.274923926" lastFinishedPulling="2026-03-20 07:06:02.075580372 +0000 UTC m=+994.334891523" observedRunningTime="2026-03-20 07:06:02.406287282 +0000 UTC m=+994.665598433" watchObservedRunningTime="2026-03-20 07:06:03.175214973 +0000 UTC m=+995.434526124"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.177659 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"]
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.178623 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.180203 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.188892 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"]
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.374209 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.374261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7wvd\" (UniqueName: \"kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.374383 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.392557 5136 generic.go:334] "Generic (PLEG): container finished" podID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerID="eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7" exitCode=0
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.392652 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerDied","Data":"eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7"}
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.396071 5136 generic.go:334] "Generic (PLEG): container finished" podID="ca3533ad-761e-45d8-8a1a-0e679b602e08" containerID="6bbf8fa191e070a1c91f2d1ea94b4f26f7559168925696c34903d08a5a0065c5" exitCode=0
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.396134 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" event={"ID":"ca3533ad-761e-45d8-8a1a-0e679b602e08","Type":"ContainerDied","Data":"6bbf8fa191e070a1c91f2d1ea94b4f26f7559168925696c34903d08a5a0065c5"}
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.474996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.475053 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7wvd\" (UniqueName: \"kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.475168 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.475735 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.475800 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.492795 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7wvd\" (UniqueName: \"kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.536436 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:03 crc kubenswrapper[5136]: I0320 07:06:03.931845 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"]
Mar 20 07:06:03 crc kubenswrapper[5136]: W0320 07:06:03.940295 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod900e35e2_638e_47f2_8943_1642ed3ccc59.slice/crio-6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af WatchSource:0}: Error finding container 6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af: Status 404 returned error can't find the container with id 6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af
Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.404976 5136 generic.go:334] "Generic (PLEG): container finished" podID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerID="b5b3a6ca6f0030d6bf6693d9a423335e56eaa24afaf72167c11ac747d010daf3" exitCode=0
Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.414241 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerStarted","Data":"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556"}
Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.414282 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" event={"ID":"900e35e2-638e-47f2-8943-1642ed3ccc59","Type":"ContainerDied","Data":"b5b3a6ca6f0030d6bf6693d9a423335e56eaa24afaf72167c11ac747d010daf3"}
Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.414299 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" event={"ID":"900e35e2-638e-47f2-8943-1642ed3ccc59","Type":"ContainerStarted","Data":"6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af"}
Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.429530 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-22q85" podStartSLOduration=2.021861287 podStartE2EDuration="4.429510543s" podCreationTimestamp="2026-03-20 07:06:00 +0000 UTC" firstStartedPulling="2026-03-20 07:06:01.377021178 +0000 UTC m=+993.636332339" lastFinishedPulling="2026-03-20 07:06:03.784670444 +0000 UTC m=+996.043981595" observedRunningTime="2026-03-20 07:06:04.425891821 +0000 UTC m=+996.685202972" watchObservedRunningTime="2026-03-20 07:06:04.429510543 +0000 UTC m=+996.688821694"
Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.666400 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566506-bbg6r"
Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.790617 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmm5s\" (UniqueName: \"kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s\") pod \"ca3533ad-761e-45d8-8a1a-0e679b602e08\" (UID: \"ca3533ad-761e-45d8-8a1a-0e679b602e08\") "
Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.797116 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s" (OuterVolumeSpecName: "kube-api-access-zmm5s") pod "ca3533ad-761e-45d8-8a1a-0e679b602e08" (UID: "ca3533ad-761e-45d8-8a1a-0e679b602e08"). InnerVolumeSpecName "kube-api-access-zmm5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:06:04 crc kubenswrapper[5136]: I0320 07:06:04.892360 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmm5s\" (UniqueName: \"kubernetes.io/projected/ca3533ad-761e-45d8-8a1a-0e679b602e08-kube-api-access-zmm5s\") on node \"crc\" DevicePath \"\""
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.364336 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-bjqjp" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerName="console" containerID="cri-o://e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a" gracePeriod=15
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.411294 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566506-bbg6r" event={"ID":"ca3533ad-761e-45d8-8a1a-0e679b602e08","Type":"ContainerDied","Data":"5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8"}
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.411335 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e639bd50b2bc055997e3634a5d52f595d691138c070a81a04011153b4f54dd8"
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.411448 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566506-bbg6r"
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.451992 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566500-wd9ph"]
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.457591 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566500-wd9ph"]
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.744606 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bjqjp_83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448/console/0.log"
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.744920 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bjqjp"
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903519 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") "
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903579 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcvzw\" (UniqueName: \"kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") "
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") "
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903655 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") "
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903674 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") "
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903692 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") "
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.903737 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert\") pod \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\" (UID: \"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448\") "
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.904598 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.905506 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config" (OuterVolumeSpecName: "console-config") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.905548 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca" (OuterVolumeSpecName: "service-ca") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.905713 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.911786 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.915553 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw" (OuterVolumeSpecName: "kube-api-access-pcvzw") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "kube-api-access-pcvzw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:06:05 crc kubenswrapper[5136]: I0320 07:06:05.917979 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" (UID: "83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004776 5136 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-oauth-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004807 5136 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004833 5136 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004841 5136 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-console-serving-cert\") on node \"crc\" DevicePath \"\""
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004850 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcvzw\" (UniqueName: \"kubernetes.io/projected/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-kube-api-access-pcvzw\") on node \"crc\" DevicePath \"\""
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004859 5136 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-service-ca\") on node \"crc\" DevicePath \"\""
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.004867 5136 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.409527 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdbefeb1-6fcf-4868-a30e-9fc5a016daf9" path="/var/lib/kubelet/pods/bdbefeb1-6fcf-4868-a30e-9fc5a016daf9/volumes"
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.421207 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-bjqjp_83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448/console/0.log"
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.421288 5136 generic.go:334] "Generic (PLEG): container finished" podID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerID="e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a" exitCode=2
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.421374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bjqjp" event={"ID":"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448","Type":"ContainerDied","Data":"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a"}
Mar 20 crc kubenswrapper[5136]: I0320 07:06:06.421417 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-bjqjp" event={"ID":"83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448","Type":"ContainerDied","Data":"1705fa19bd8f5aa96bc704e7afa6e708e4641f98ce0af56ebad4536addf3960e"}
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.421446 5136 scope.go:117] "RemoveContainer" containerID="e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a"
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.421628 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-bjqjp"
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.426294 5136 generic.go:334] "Generic (PLEG): container finished" podID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerID="ec34ad1ccc36a6e8304ae8f562dd09103af7e148747d2c2f87086dc152342de9" exitCode=0
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.426333 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" event={"ID":"900e35e2-638e-47f2-8943-1642ed3ccc59","Type":"ContainerDied","Data":"ec34ad1ccc36a6e8304ae8f562dd09103af7e148747d2c2f87086dc152342de9"}
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.476993 5136 scope.go:117] "RemoveContainer" containerID="e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a"
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.479066 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"]
Mar 20 07:06:06 crc kubenswrapper[5136]: E0320 07:06:06.479371 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a\": container with ID starting with e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a not found: ID does not exist" containerID="e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a"
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.479515 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a"} err="failed to get container status \"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a\": rpc error: code = NotFound desc = could not find container \"e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a\": container with ID starting with e11b1384937ac56c4ccd8423745ee02ffbb40cd7a1ae4ab6f48a3d94535c216a not found: ID does not exist"
Mar 20 07:06:06 crc kubenswrapper[5136]: I0320 07:06:06.480967 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-bjqjp"]
Mar 20 07:06:07 crc kubenswrapper[5136]: I0320 07:06:07.438330 5136 generic.go:334] "Generic (PLEG): container finished" podID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerID="dbb919c0995d72952254d8a8d763e19195320f3bc7f5006cdff5290948950f74" exitCode=0
Mar 20 07:06:07 crc kubenswrapper[5136]: I0320 07:06:07.438479 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" event={"ID":"900e35e2-638e-47f2-8943-1642ed3ccc59","Type":"ContainerDied","Data":"dbb919c0995d72952254d8a8d763e19195320f3bc7f5006cdff5290948950f74"}
Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.410765 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" path="/var/lib/kubelet/pods/83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448/volumes"
Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.743900 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn"
Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.844285 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle\") pod \"900e35e2-638e-47f2-8943-1642ed3ccc59\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") "
Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.844326 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util\") pod \"900e35e2-638e-47f2-8943-1642ed3ccc59\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") "
Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.844483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7wvd\" (UniqueName: \"kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd\") pod \"900e35e2-638e-47f2-8943-1642ed3ccc59\" (UID: \"900e35e2-638e-47f2-8943-1642ed3ccc59\") "
Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.845669 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle" (OuterVolumeSpecName: "bundle") pod "900e35e2-638e-47f2-8943-1642ed3ccc59" (UID: "900e35e2-638e-47f2-8943-1642ed3ccc59"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.851976 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd" (OuterVolumeSpecName: "kube-api-access-r7wvd") pod "900e35e2-638e-47f2-8943-1642ed3ccc59" (UID: "900e35e2-638e-47f2-8943-1642ed3ccc59"). InnerVolumeSpecName "kube-api-access-r7wvd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.865788 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util" (OuterVolumeSpecName: "util") pod "900e35e2-638e-47f2-8943-1642ed3ccc59" (UID: "900e35e2-638e-47f2-8943-1642ed3ccc59"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.947590 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7wvd\" (UniqueName: \"kubernetes.io/projected/900e35e2-638e-47f2-8943-1642ed3ccc59-kube-api-access-r7wvd\") on node \"crc\" DevicePath \"\""
Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.947620 5136 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:06:08 crc kubenswrapper[5136]: I0320 07:06:08.947633 5136 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/900e35e2-638e-47f2-8943-1642ed3ccc59-util\") on node \"crc\" DevicePath \"\""
Mar 20 07:06:09 crc kubenswrapper[5136]: I0320 07:06:09.474600 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" event={"ID":"900e35e2-638e-47f2-8943-1642ed3ccc59","Type":"ContainerDied","Data":"6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af"}
Mar 20 07:06:09 crc kubenswrapper[5136]: I0320 07:06:09.474650 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dd2228c2dd91ace53fd2f81a50c849531d9e841ff2e25ba7052f7cee86d67af"
Mar 20 07:06:09 crc kubenswrapper[5136]: I0320 07:06:09.474744 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn" Mar 20 07:06:09 crc kubenswrapper[5136]: E0320 07:06:09.519980 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod900e35e2_638e_47f2_8943_1642ed3ccc59.slice\": RecentStats: unable to find data in memory cache]" Mar 20 07:06:10 crc kubenswrapper[5136]: I0320 07:06:10.532073 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:10 crc kubenswrapper[5136]: I0320 07:06:10.532390 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:10 crc kubenswrapper[5136]: I0320 07:06:10.577052 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:11 crc kubenswrapper[5136]: I0320 07:06:11.543296 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:12 crc kubenswrapper[5136]: I0320 07:06:12.730105 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22q85"] Mar 20 07:06:13 crc kubenswrapper[5136]: I0320 07:06:13.495149 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-22q85" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="registry-server" containerID="cri-o://aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556" gracePeriod=2 Mar 20 07:06:13 crc kubenswrapper[5136]: I0320 07:06:13.855785 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.010540 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content\") pod \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.010629 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities\") pod \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.010659 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwfl2\" (UniqueName: \"kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2\") pod \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\" (UID: \"33b79f09-8ecc-4d05-87a7-94aa63e461a1\") " Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.011669 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities" (OuterVolumeSpecName: "utilities") pod "33b79f09-8ecc-4d05-87a7-94aa63e461a1" (UID: "33b79f09-8ecc-4d05-87a7-94aa63e461a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.017712 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2" (OuterVolumeSpecName: "kube-api-access-pwfl2") pod "33b79f09-8ecc-4d05-87a7-94aa63e461a1" (UID: "33b79f09-8ecc-4d05-87a7-94aa63e461a1"). InnerVolumeSpecName "kube-api-access-pwfl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.086869 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "33b79f09-8ecc-4d05-87a7-94aa63e461a1" (UID: "33b79f09-8ecc-4d05-87a7-94aa63e461a1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.111678 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.111714 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwfl2\" (UniqueName: \"kubernetes.io/projected/33b79f09-8ecc-4d05-87a7-94aa63e461a1-kube-api-access-pwfl2\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.111729 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/33b79f09-8ecc-4d05-87a7-94aa63e461a1-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.503223 5136 generic.go:334] "Generic (PLEG): container finished" podID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerID="aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556" exitCode=0 Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.503314 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22q85" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.503319 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerDied","Data":"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556"} Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.504049 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22q85" event={"ID":"33b79f09-8ecc-4d05-87a7-94aa63e461a1","Type":"ContainerDied","Data":"b53b093dff156948f43b9de07506ee639a75c958288136288faaca5060f1f42b"} Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.504082 5136 scope.go:117] "RemoveContainer" containerID="aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.523862 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-22q85"] Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.525348 5136 scope.go:117] "RemoveContainer" containerID="eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.529716 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-22q85"] Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.546839 5136 scope.go:117] "RemoveContainer" containerID="4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.575723 5136 scope.go:117] "RemoveContainer" containerID="aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556" Mar 20 07:06:14 crc kubenswrapper[5136]: E0320 07:06:14.577237 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556\": container with ID starting with aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556 not found: ID does not exist" containerID="aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.577286 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556"} err="failed to get container status \"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556\": rpc error: code = NotFound desc = could not find container \"aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556\": container with ID starting with aaa06b5cf3bc8b412bf99f9b9e394a66bdaa643129ad3fa5d3faae95a20fe556 not found: ID does not exist" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.577309 5136 scope.go:117] "RemoveContainer" containerID="eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7" Mar 20 07:06:14 crc kubenswrapper[5136]: E0320 07:06:14.577621 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7\": container with ID starting with eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7 not found: ID does not exist" containerID="eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.577658 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7"} err="failed to get container status \"eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7\": rpc error: code = NotFound desc = could not find container \"eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7\": container with ID 
starting with eadefc14d91f33ea2874a958959e71a9f09858e989134e7b3b5409ff7dad78a7 not found: ID does not exist" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.577675 5136 scope.go:117] "RemoveContainer" containerID="4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff" Mar 20 07:06:14 crc kubenswrapper[5136]: E0320 07:06:14.578117 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff\": container with ID starting with 4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff not found: ID does not exist" containerID="4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff" Mar 20 07:06:14 crc kubenswrapper[5136]: I0320 07:06:14.578187 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff"} err="failed to get container status \"4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff\": rpc error: code = NotFound desc = could not find container \"4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff\": container with ID starting with 4f2549609d32b58f2e83a49791816d97fc18391c6b2c6a432afff935aefafdff not found: ID does not exist" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239026 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn"] Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239476 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="extract-content" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239486 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="extract-content" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239500 
5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerName="console" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239505 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerName="console" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239514 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="util" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239520 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="util" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239531 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca3533ad-761e-45d8-8a1a-0e679b602e08" containerName="oc" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239539 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca3533ad-761e-45d8-8a1a-0e679b602e08" containerName="oc" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239545 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="extract" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239550 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="extract" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239559 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="extract-utilities" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239566 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="extract-utilities" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239574 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="registry-server" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239580 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="registry-server" Mar 20 07:06:16 crc kubenswrapper[5136]: E0320 07:06:16.239589 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="pull" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239594 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="pull" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239684 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" containerName="registry-server" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239692 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca3533ad-761e-45d8-8a1a-0e679b602e08" containerName="oc" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239701 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="83c75bbf-dbd8-4c6f-bdf8-fea9ae8c0448" containerName="console" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.239714 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="900e35e2-638e-47f2-8943-1642ed3ccc59" containerName="extract" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.240071 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.244418 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.244595 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.245074 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.247926 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.252574 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-4kvhj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.256256 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn"] Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.340077 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-webhook-cert\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.340174 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-apiservice-cert\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: 
\"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.340467 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82qlz\" (UniqueName: \"kubernetes.io/projected/8738cb21-39f9-4eeb-90fc-f512d95642f3-kube-api-access-82qlz\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.403339 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33b79f09-8ecc-4d05-87a7-94aa63e461a1" path="/var/lib/kubelet/pods/33b79f09-8ecc-4d05-87a7-94aa63e461a1/volumes" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.441502 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82qlz\" (UniqueName: \"kubernetes.io/projected/8738cb21-39f9-4eeb-90fc-f512d95642f3-kube-api-access-82qlz\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.441548 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-webhook-cert\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.441584 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-apiservice-cert\") pod 
\"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.452851 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-apiservice-cert\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.456456 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82qlz\" (UniqueName: \"kubernetes.io/projected/8738cb21-39f9-4eeb-90fc-f512d95642f3-kube-api-access-82qlz\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.464695 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/8738cb21-39f9-4eeb-90fc-f512d95642f3-webhook-cert\") pod \"metallb-operator-controller-manager-76dc698dd8-wkrqn\" (UID: \"8738cb21-39f9-4eeb-90fc-f512d95642f3\") " pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.555718 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.580276 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-787f65f959-lkczj"] Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.581121 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.582948 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.583384 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.583683 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-bsmt8" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.645645 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfdst\" (UniqueName: \"kubernetes.io/projected/f9ad7722-3864-444d-92a1-235de7707fe4-kube-api-access-vfdst\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.645690 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-webhook-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 
07:06:16.645763 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-apiservice-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.656389 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-787f65f959-lkczj"] Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.747253 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-apiservice-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.747647 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfdst\" (UniqueName: \"kubernetes.io/projected/f9ad7722-3864-444d-92a1-235de7707fe4-kube-api-access-vfdst\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.747676 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-webhook-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.753047 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-apiservice-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.753143 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f9ad7722-3864-444d-92a1-235de7707fe4-webhook-cert\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.773962 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfdst\" (UniqueName: \"kubernetes.io/projected/f9ad7722-3864-444d-92a1-235de7707fe4-kube-api-access-vfdst\") pod \"metallb-operator-webhook-server-787f65f959-lkczj\" (UID: \"f9ad7722-3864-444d-92a1-235de7707fe4\") " pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.801853 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn"] Mar 20 07:06:16 crc kubenswrapper[5136]: I0320 07:06:16.924916 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:17 crc kubenswrapper[5136]: I0320 07:06:17.175741 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-787f65f959-lkczj"] Mar 20 07:06:17 crc kubenswrapper[5136]: W0320 07:06:17.178718 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9ad7722_3864_444d_92a1_235de7707fe4.slice/crio-d0f3e032e3a10434097984e90c2cb1eddd6117692de0b2915f701fe2544b7331 WatchSource:0}: Error finding container d0f3e032e3a10434097984e90c2cb1eddd6117692de0b2915f701fe2544b7331: Status 404 returned error can't find the container with id d0f3e032e3a10434097984e90c2cb1eddd6117692de0b2915f701fe2544b7331 Mar 20 07:06:17 crc kubenswrapper[5136]: I0320 07:06:17.531798 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" event={"ID":"f9ad7722-3864-444d-92a1-235de7707fe4","Type":"ContainerStarted","Data":"d0f3e032e3a10434097984e90c2cb1eddd6117692de0b2915f701fe2544b7331"} Mar 20 07:06:17 crc kubenswrapper[5136]: I0320 07:06:17.532740 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" event={"ID":"8738cb21-39f9-4eeb-90fc-f512d95642f3","Type":"ContainerStarted","Data":"759aed0e0b086233ee262f294980df967caf8298aca7b9bf5d7cf8926c36d601"} Mar 20 07:06:21 crc kubenswrapper[5136]: I0320 07:06:21.565659 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" event={"ID":"8738cb21-39f9-4eeb-90fc-f512d95642f3","Type":"ContainerStarted","Data":"a2fdc59d0327c0c66c3ba10c85c0cf40992d46cf4893f53cf3095f2ef3e91d72"} Mar 20 07:06:21 crc kubenswrapper[5136]: I0320 07:06:21.566178 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:21 crc kubenswrapper[5136]: I0320 07:06:21.597796 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" podStartSLOduration=1.911725082 podStartE2EDuration="5.597771596s" podCreationTimestamp="2026-03-20 07:06:16 +0000 UTC" firstStartedPulling="2026-03-20 07:06:16.815018973 +0000 UTC m=+1009.074330124" lastFinishedPulling="2026-03-20 07:06:20.501065487 +0000 UTC m=+1012.760376638" observedRunningTime="2026-03-20 07:06:21.586598436 +0000 UTC m=+1013.845909607" watchObservedRunningTime="2026-03-20 07:06:21.597771596 +0000 UTC m=+1013.857082757" Mar 20 07:06:22 crc kubenswrapper[5136]: I0320 07:06:22.571761 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" event={"ID":"f9ad7722-3864-444d-92a1-235de7707fe4","Type":"ContainerStarted","Data":"38d6b2c2a1fc1a785e809cf5e6cb53cc76a1dd5bbc8e37365b9df844dc1ea77d"} Mar 20 07:06:22 crc kubenswrapper[5136]: I0320 07:06:22.592049 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" podStartSLOduration=1.690092353 podStartE2EDuration="6.592024443s" podCreationTimestamp="2026-03-20 07:06:16 +0000 UTC" firstStartedPulling="2026-03-20 07:06:17.182139075 +0000 UTC m=+1009.441450226" lastFinishedPulling="2026-03-20 07:06:22.084071165 +0000 UTC m=+1014.343382316" observedRunningTime="2026-03-20 07:06:22.586236541 +0000 UTC m=+1014.845547692" watchObservedRunningTime="2026-03-20 07:06:22.592024443 +0000 UTC m=+1014.851335594" Mar 20 07:06:23 crc kubenswrapper[5136]: I0320 07:06:23.578067 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:29 crc kubenswrapper[5136]: I0320 07:06:29.120251 5136 
scope.go:117] "RemoveContainer" containerID="c700468779627f6961723a07d9133659d892564be897053e621e205bd14c1cbb" Mar 20 07:06:36 crc kubenswrapper[5136]: I0320 07:06:36.937234 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-787f65f959-lkczj" Mar 20 07:06:56 crc kubenswrapper[5136]: I0320 07:06:56.558492 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-76dc698dd8-wkrqn" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.305888 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-bjq5z"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.308195 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.309904 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.310098 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-vlhfz" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.310253 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.316208 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.317070 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.318481 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.332782 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.381876 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-nrftr"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.382999 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.385076 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.385121 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.385428 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.385621 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-ngtp6" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.390917 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-dzzhq"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.391684 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.393058 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.404066 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-dzzhq"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495276 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-sockets\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495316 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mt6q\" (UniqueName: \"kubernetes.io/projected/4c981a48-1ae6-4c06-90ed-4333de6a14d2-kube-api-access-9mt6q\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495340 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/037785f1-4827-4473-8997-20cdc8fec776-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495378 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" 
Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495399 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-conf\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495416 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-metrics-certs\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495433 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-cert\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495450 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d54436ca-ad6f-41c2-ae88-703f150229fc-metallb-excludel2\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495465 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk5jk\" (UniqueName: \"kubernetes.io/projected/11c03832-f8fc-4790-98f6-43290c528ce9-kube-api-access-bk5jk\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495487 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6592\" (UniqueName: \"kubernetes.io/projected/d54436ca-ad6f-41c2-ae88-703f150229fc-kube-api-access-n6592\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495504 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-metrics-certs\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495518 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-metrics\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495549 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wljk9\" (UniqueName: \"kubernetes.io/projected/037785f1-4827-4473-8997-20cdc8fec776-kube-api-access-wljk9\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495565 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-reloader\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495582 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11c03832-f8fc-4790-98f6-43290c528ce9-frr-startup\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.495599 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11c03832-f8fc-4790-98f6-43290c528ce9-metrics-certs\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597068 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wljk9\" (UniqueName: \"kubernetes.io/projected/037785f1-4827-4473-8997-20cdc8fec776-kube-api-access-wljk9\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597116 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-reloader\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597155 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11c03832-f8fc-4790-98f6-43290c528ce9-frr-startup\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597200 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/11c03832-f8fc-4790-98f6-43290c528ce9-metrics-certs\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597256 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-sockets\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597280 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mt6q\" (UniqueName: \"kubernetes.io/projected/4c981a48-1ae6-4c06-90ed-4333de6a14d2-kube-api-access-9mt6q\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597308 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/037785f1-4827-4473-8997-20cdc8fec776-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597390 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-conf\") pod \"frr-k8s-bjq5z\" (UID: 
\"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-metrics-certs\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597441 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-cert\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597485 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d54436ca-ad6f-41c2-ae88-703f150229fc-metallb-excludel2\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597505 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk5jk\" (UniqueName: \"kubernetes.io/projected/11c03832-f8fc-4790-98f6-43290c528ce9-kube-api-access-bk5jk\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597533 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6592\" (UniqueName: \"kubernetes.io/projected/d54436ca-ad6f-41c2-ae88-703f150229fc-kube-api-access-n6592\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597555 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-metrics-certs\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.597577 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-metrics\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.598013 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-metrics\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.599479 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-conf\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.599487 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-reloader\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: E0320 07:06:57.599573 5136 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 20 07:06:57 crc kubenswrapper[5136]: E0320 07:06:57.599631 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist podName:d54436ca-ad6f-41c2-ae88-703f150229fc nodeName:}" failed. No retries permitted until 2026-03-20 07:06:58.099613481 +0000 UTC m=+1050.358924632 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist") pod "speaker-nrftr" (UID: "d54436ca-ad6f-41c2-ae88-703f150229fc") : secret "metallb-memberlist" not found Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.599992 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/11c03832-f8fc-4790-98f6-43290c528ce9-frr-sockets\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.600069 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/d54436ca-ad6f-41c2-ae88-703f150229fc-metallb-excludel2\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.600078 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/11c03832-f8fc-4790-98f6-43290c528ce9-frr-startup\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.604066 5136 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.604722 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/11c03832-f8fc-4790-98f6-43290c528ce9-metrics-certs\") pod 
\"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.605924 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-metrics-certs\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.618705 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-metrics-certs\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.618892 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mt6q\" (UniqueName: \"kubernetes.io/projected/4c981a48-1ae6-4c06-90ed-4333de6a14d2-kube-api-access-9mt6q\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.620187 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4c981a48-1ae6-4c06-90ed-4333de6a14d2-cert\") pod \"controller-7bb4cc7c98-dzzhq\" (UID: \"4c981a48-1ae6-4c06-90ed-4333de6a14d2\") " pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.620507 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wljk9\" (UniqueName: \"kubernetes.io/projected/037785f1-4827-4473-8997-20cdc8fec776-kube-api-access-wljk9\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " 
pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.621344 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/037785f1-4827-4473-8997-20cdc8fec776-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-b8fzm\" (UID: \"037785f1-4827-4473-8997-20cdc8fec776\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.624761 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk5jk\" (UniqueName: \"kubernetes.io/projected/11c03832-f8fc-4790-98f6-43290c528ce9-kube-api-access-bk5jk\") pod \"frr-k8s-bjq5z\" (UID: \"11c03832-f8fc-4790-98f6-43290c528ce9\") " pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.630752 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.635218 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6592\" (UniqueName: \"kubernetes.io/projected/d54436ca-ad6f-41c2-ae88-703f150229fc-kube-api-access-n6592\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.702472 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.840490 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm"] Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.913235 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-dzzhq"] Mar 20 07:06:57 crc kubenswrapper[5136]: W0320 07:06:57.922660 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c981a48_1ae6_4c06_90ed_4333de6a14d2.slice/crio-2c416ff9cd1d0533022eaeb0f20230c869f23068c427cf745763c3ff4bf61720 WatchSource:0}: Error finding container 2c416ff9cd1d0533022eaeb0f20230c869f23068c427cf745763c3ff4bf61720: Status 404 returned error can't find the container with id 2c416ff9cd1d0533022eaeb0f20230c869f23068c427cf745763c3ff4bf61720 Mar 20 07:06:57 crc kubenswrapper[5136]: I0320 07:06:57.924014 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.106272 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.113244 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/d54436ca-ad6f-41c2-ae88-703f150229fc-memberlist\") pod \"speaker-nrftr\" (UID: \"d54436ca-ad6f-41c2-ae88-703f150229fc\") " pod="metallb-system/speaker-nrftr" Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.294803 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-nrftr" Mar 20 07:06:58 crc kubenswrapper[5136]: W0320 07:06:58.328142 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd54436ca_ad6f_41c2_ae88_703f150229fc.slice/crio-c8f50e75c4d1e4cedf4177fa913c54cc3bcc83d520993de37980264e1843e470 WatchSource:0}: Error finding container c8f50e75c4d1e4cedf4177fa913c54cc3bcc83d520993de37980264e1843e470: Status 404 returned error can't find the container with id c8f50e75c4d1e4cedf4177fa913c54cc3bcc83d520993de37980264e1843e470 Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.794901 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"d1bad7e1dff2a157c7e1f83d5dd7662fb06eda784212bc4e41086bdc8b3a561a"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.797502 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nrftr" event={"ID":"d54436ca-ad6f-41c2-ae88-703f150229fc","Type":"ContainerStarted","Data":"f8f8da005789d4bddcdf87716453b4f98ecb7198b70703bda4238755c115a4d4"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.797616 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nrftr" event={"ID":"d54436ca-ad6f-41c2-ae88-703f150229fc","Type":"ContainerStarted","Data":"c8f50e75c4d1e4cedf4177fa913c54cc3bcc83d520993de37980264e1843e470"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.801884 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-dzzhq" event={"ID":"4c981a48-1ae6-4c06-90ed-4333de6a14d2","Type":"ContainerStarted","Data":"fd8677088f487f52a728fb17dafdb51424c7a7676ecce30b8dda96546b601a9e"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.801999 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-dzzhq" 
event={"ID":"4c981a48-1ae6-4c06-90ed-4333de6a14d2","Type":"ContainerStarted","Data":"4f9ee5acf0afc61181e89beadce206d7aeed39bed3421a6eda6dab2fd267dcfd"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.802091 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.802149 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-dzzhq" event={"ID":"4c981a48-1ae6-4c06-90ed-4333de6a14d2","Type":"ContainerStarted","Data":"2c416ff9cd1d0533022eaeb0f20230c869f23068c427cf745763c3ff4bf61720"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.803364 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" event={"ID":"037785f1-4827-4473-8997-20cdc8fec776","Type":"ContainerStarted","Data":"7f23eb36bd47279e94a79c45a67e3d78e97ba659682ca797a1ccbda5dc37c90c"} Mar 20 07:06:58 crc kubenswrapper[5136]: I0320 07:06:58.822124 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-dzzhq" podStartSLOduration=1.822104771 podStartE2EDuration="1.822104771s" podCreationTimestamp="2026-03-20 07:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:06:58.819051358 +0000 UTC m=+1051.078362519" watchObservedRunningTime="2026-03-20 07:06:58.822104771 +0000 UTC m=+1051.081415922" Mar 20 07:06:59 crc kubenswrapper[5136]: I0320 07:06:59.815307 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-nrftr" event={"ID":"d54436ca-ad6f-41c2-ae88-703f150229fc","Type":"ContainerStarted","Data":"df5da50f2e79e1e5b420a5036d3e271000c056e41b9cbbd03c2046fd8d04a825"} Mar 20 07:06:59 crc kubenswrapper[5136]: I0320 07:06:59.834514 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/speaker-nrftr" podStartSLOduration=2.834497656 podStartE2EDuration="2.834497656s" podCreationTimestamp="2026-03-20 07:06:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:06:59.832450234 +0000 UTC m=+1052.091761395" watchObservedRunningTime="2026-03-20 07:06:59.834497656 +0000 UTC m=+1052.093808807" Mar 20 07:07:00 crc kubenswrapper[5136]: I0320 07:07:00.829384 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-nrftr" Mar 20 07:07:04 crc kubenswrapper[5136]: I0320 07:07:04.864674 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" event={"ID":"037785f1-4827-4473-8997-20cdc8fec776","Type":"ContainerStarted","Data":"a5627982b25a7799bbad056028bfa166a9488e5ecdb741bbeba842c695a2a028"} Mar 20 07:07:04 crc kubenswrapper[5136]: I0320 07:07:04.865303 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:07:04 crc kubenswrapper[5136]: I0320 07:07:04.866455 5136 generic.go:334] "Generic (PLEG): container finished" podID="11c03832-f8fc-4790-98f6-43290c528ce9" containerID="2e5a2f302966bf2ecc92b2e2219e62c994619babb2dfc81c294b39e99d06df05" exitCode=0 Mar 20 07:07:04 crc kubenswrapper[5136]: I0320 07:07:04.866495 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerDied","Data":"2e5a2f302966bf2ecc92b2e2219e62c994619babb2dfc81c294b39e99d06df05"} Mar 20 07:07:04 crc kubenswrapper[5136]: I0320 07:07:04.885539 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" podStartSLOduration=1.292712709 podStartE2EDuration="7.885523681s" podCreationTimestamp="2026-03-20 07:06:57 +0000 UTC" 
firstStartedPulling="2026-03-20 07:06:57.849283049 +0000 UTC m=+1050.108594200" lastFinishedPulling="2026-03-20 07:07:04.442094011 +0000 UTC m=+1056.701405172" observedRunningTime="2026-03-20 07:07:04.883766246 +0000 UTC m=+1057.143077397" watchObservedRunningTime="2026-03-20 07:07:04.885523681 +0000 UTC m=+1057.144834832" Mar 20 07:07:05 crc kubenswrapper[5136]: I0320 07:07:05.876105 5136 generic.go:334] "Generic (PLEG): container finished" podID="11c03832-f8fc-4790-98f6-43290c528ce9" containerID="c23405a6620278bd98783ee0b6d08e662006e28c393ebacf86397ec5a67ff1b4" exitCode=0 Mar 20 07:07:05 crc kubenswrapper[5136]: I0320 07:07:05.876196 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerDied","Data":"c23405a6620278bd98783ee0b6d08e662006e28c393ebacf86397ec5a67ff1b4"} Mar 20 07:07:06 crc kubenswrapper[5136]: I0320 07:07:06.883597 5136 generic.go:334] "Generic (PLEG): container finished" podID="11c03832-f8fc-4790-98f6-43290c528ce9" containerID="edab9a7cc4f60cec2c70af4613523b56e8a422eee050fd0ffe3ec77d73cc80b3" exitCode=0 Mar 20 07:07:06 crc kubenswrapper[5136]: I0320 07:07:06.883688 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerDied","Data":"edab9a7cc4f60cec2c70af4613523b56e8a422eee050fd0ffe3ec77d73cc80b3"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893450 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"bb68198ed84c2ab2ca3d4249c1e11ff36069ff3fff87bc27cb727a5a72b6cdac"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893750 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" 
event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"88100f63a82b09bb00c22aaf406a57cfa5b991c4dc461d24046d6363a0d32834"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893765 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893775 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"f668e9edc46de0b468685a628a4ead5b0066b252e9cd5ffb367decb7478dc302"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893785 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"5a91ca9db460c4bdfa4411064eb8b22c0a8f917f0af7ea9a88dde128a9114094"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893792 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"772f401239653dc5968b39b347b76c2db1a0389dad4bbfe1ae37787eda7efbce"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.893801 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-bjq5z" event={"ID":"11c03832-f8fc-4790-98f6-43290c528ce9","Type":"ContainerStarted","Data":"96a2ca0fb38c42657d88bc0b7fc1431bd88f07f62a1ee6fb97388eeef243651c"} Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.919347 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-bjq5z" podStartSLOduration=4.534445887 podStartE2EDuration="10.919327939s" podCreationTimestamp="2026-03-20 07:06:57 +0000 UTC" firstStartedPulling="2026-03-20 07:06:58.031481664 +0000 UTC m=+1050.290792815" lastFinishedPulling="2026-03-20 07:07:04.416363676 +0000 UTC m=+1056.675674867" 
observedRunningTime="2026-03-20 07:07:07.913754277 +0000 UTC m=+1060.173065438" watchObservedRunningTime="2026-03-20 07:07:07.919327939 +0000 UTC m=+1060.178639110" Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.925229 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:07:07 crc kubenswrapper[5136]: I0320 07:07:07.965470 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:07:08 crc kubenswrapper[5136]: I0320 07:07:08.300767 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-nrftr" Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.880128 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl"] Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.881869 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.884556 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.894440 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl"] Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.976369 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lcsd\" (UniqueName: \"kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.976455 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:09 crc kubenswrapper[5136]: I0320 07:07:09.976540 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: 
I0320 07:07:10.077623 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.077681 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lcsd\" (UniqueName: \"kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.077751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.078252 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.078579 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.096860 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lcsd\" (UniqueName: \"kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.201045 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.630531 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl"] Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.916885 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerID="9d4915209e9cd1aee15f5e7887b40473508d7888446113ef0a2ff2d007c2cb14" exitCode=0 Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.916981 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" event={"ID":"bd4f9716-cbae-44b8-ba7a-44aaa92dae66","Type":"ContainerDied","Data":"9d4915209e9cd1aee15f5e7887b40473508d7888446113ef0a2ff2d007c2cb14"} Mar 20 07:07:10 crc kubenswrapper[5136]: I0320 07:07:10.917255 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" event={"ID":"bd4f9716-cbae-44b8-ba7a-44aaa92dae66","Type":"ContainerStarted","Data":"f1ded7a06cccfe2e68a9941c8a9b7358062eb8ecfb184de223c411b491e42ca8"} Mar 20 07:07:15 crc kubenswrapper[5136]: I0320 07:07:15.951511 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerID="2d839fb43f25dbf8d31e7a6535aafda786a1cab3db5f18c1caf2b0163209f4f0" exitCode=0 Mar 20 07:07:15 crc kubenswrapper[5136]: I0320 07:07:15.951620 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" event={"ID":"bd4f9716-cbae-44b8-ba7a-44aaa92dae66","Type":"ContainerDied","Data":"2d839fb43f25dbf8d31e7a6535aafda786a1cab3db5f18c1caf2b0163209f4f0"} Mar 20 07:07:16 crc kubenswrapper[5136]: I0320 07:07:16.960783 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerID="35ff8e67e1e936a1ca29c0207ad2c78e44c549b9363c2a0fedfee843535c8849" exitCode=0 Mar 20 07:07:16 crc kubenswrapper[5136]: I0320 07:07:16.960886 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" event={"ID":"bd4f9716-cbae-44b8-ba7a-44aaa92dae66","Type":"ContainerDied","Data":"35ff8e67e1e936a1ca29c0207ad2c78e44c549b9363c2a0fedfee843535c8849"} Mar 20 07:07:17 crc kubenswrapper[5136]: I0320 07:07:17.636749 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-b8fzm" Mar 20 07:07:17 crc kubenswrapper[5136]: I0320 07:07:17.706427 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-dzzhq" Mar 20 07:07:17 crc kubenswrapper[5136]: I0320 07:07:17.932770 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/frr-k8s-bjq5z" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.330047 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.492965 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util\") pod \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.493063 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle\") pod \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.493083 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lcsd\" (UniqueName: \"kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd\") pod \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\" (UID: \"bd4f9716-cbae-44b8-ba7a-44aaa92dae66\") " Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.494508 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle" (OuterVolumeSpecName: "bundle") pod "bd4f9716-cbae-44b8-ba7a-44aaa92dae66" (UID: "bd4f9716-cbae-44b8-ba7a-44aaa92dae66"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.498691 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd" (OuterVolumeSpecName: "kube-api-access-2lcsd") pod "bd4f9716-cbae-44b8-ba7a-44aaa92dae66" (UID: "bd4f9716-cbae-44b8-ba7a-44aaa92dae66"). InnerVolumeSpecName "kube-api-access-2lcsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.507575 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util" (OuterVolumeSpecName: "util") pod "bd4f9716-cbae-44b8-ba7a-44aaa92dae66" (UID: "bd4f9716-cbae-44b8-ba7a-44aaa92dae66"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.595839 5136 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-util\") on node \"crc\" DevicePath \"\"" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.595885 5136 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.595903 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lcsd\" (UniqueName: \"kubernetes.io/projected/bd4f9716-cbae-44b8-ba7a-44aaa92dae66-kube-api-access-2lcsd\") on node \"crc\" DevicePath \"\"" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.978910 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" 
event={"ID":"bd4f9716-cbae-44b8-ba7a-44aaa92dae66","Type":"ContainerDied","Data":"f1ded7a06cccfe2e68a9941c8a9b7358062eb8ecfb184de223c411b491e42ca8"} Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.978944 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1ded7a06cccfe2e68a9941c8a9b7358062eb8ecfb184de223c411b491e42ca8" Mar 20 07:07:18 crc kubenswrapper[5136]: I0320 07:07:18.979003 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.258500 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb"] Mar 20 07:07:23 crc kubenswrapper[5136]: E0320 07:07:23.259247 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="pull" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.259261 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="pull" Mar 20 07:07:23 crc kubenswrapper[5136]: E0320 07:07:23.259275 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="extract" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.259281 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="extract" Mar 20 07:07:23 crc kubenswrapper[5136]: E0320 07:07:23.259293 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="util" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.259301 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="util" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.259417 
5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd4f9716-cbae-44b8-ba7a-44aaa92dae66" containerName="extract" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.259883 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.264289 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.265319 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.266097 5136 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-xrv2l" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.278558 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb"] Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.455739 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzbgw\" (UniqueName: \"kubernetes.io/projected/ab58c510-da95-4ce8-855c-f58a8f46c61d-kube-api-access-zzbgw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.455798 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ab58c510-da95-4ce8-855c-f58a8f46c61d-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " 
pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.557314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzbgw\" (UniqueName: \"kubernetes.io/projected/ab58c510-da95-4ce8-855c-f58a8f46c61d-kube-api-access-zzbgw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.557374 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ab58c510-da95-4ce8-855c-f58a8f46c61d-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.557994 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/ab58c510-da95-4ce8-855c-f58a8f46c61d-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.595685 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzbgw\" (UniqueName: \"kubernetes.io/projected/ab58c510-da95-4ce8-855c-f58a8f46c61d-kube-api-access-zzbgw\") pod \"cert-manager-operator-controller-manager-66c8bdd694-tb4jb\" (UID: \"ab58c510-da95-4ce8-855c-f58a8f46c61d\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:23 crc kubenswrapper[5136]: I0320 07:07:23.881501 5136 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" Mar 20 07:07:24 crc kubenswrapper[5136]: I0320 07:07:24.394142 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb"] Mar 20 07:07:24 crc kubenswrapper[5136]: W0320 07:07:24.399896 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab58c510_da95_4ce8_855c_f58a8f46c61d.slice/crio-23fc4890c9842e18773a8c1937e803432ff7ccceba82dacbd577c0107e92a75b WatchSource:0}: Error finding container 23fc4890c9842e18773a8c1937e803432ff7ccceba82dacbd577c0107e92a75b: Status 404 returned error can't find the container with id 23fc4890c9842e18773a8c1937e803432ff7ccceba82dacbd577c0107e92a75b Mar 20 07:07:25 crc kubenswrapper[5136]: I0320 07:07:25.026958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" event={"ID":"ab58c510-da95-4ce8-855c-f58a8f46c61d","Type":"ContainerStarted","Data":"23fc4890c9842e18773a8c1937e803432ff7ccceba82dacbd577c0107e92a75b"} Mar 20 07:07:28 crc kubenswrapper[5136]: I0320 07:07:28.044291 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" event={"ID":"ab58c510-da95-4ce8-855c-f58a8f46c61d","Type":"ContainerStarted","Data":"9cc29c3201590a834800efd0e8e6097e0f38b6c636c8bcc98e7baa68724f2b66"} Mar 20 07:07:28 crc kubenswrapper[5136]: I0320 07:07:28.062093 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-tb4jb" podStartSLOduration=2.255530209 podStartE2EDuration="5.062076941s" podCreationTimestamp="2026-03-20 07:07:23 +0000 UTC" firstStartedPulling="2026-03-20 07:07:24.403530316 +0000 UTC m=+1076.662841467" 
lastFinishedPulling="2026-03-20 07:07:27.210077048 +0000 UTC m=+1079.469388199" observedRunningTime="2026-03-20 07:07:28.05977474 +0000 UTC m=+1080.319085891" watchObservedRunningTime="2026-03-20 07:07:28.062076941 +0000 UTC m=+1080.321388092" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.308933 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-4l568"] Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.310086 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.312097 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.314067 5136 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-qnqnl" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.315992 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.319587 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-4l568"] Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.502785 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svhpg\" (UniqueName: \"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-kube-api-access-svhpg\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.502838 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.604642 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svhpg\" (UniqueName: \"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-kube-api-access-svhpg\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.604700 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.627194 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.640734 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svhpg\" (UniqueName: \"kubernetes.io/projected/6168deec-ad68-4f6d-9736-422a6c7ade08-kube-api-access-svhpg\") pod \"cert-manager-webhook-6888856db4-4l568\" (UID: \"6168deec-ad68-4f6d-9736-422a6c7ade08\") " pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:31 crc kubenswrapper[5136]: I0320 07:07:31.926663 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.260168 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4757p"] Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.261291 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.265610 5136 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-tnttm" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.270561 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4757p"] Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.323647 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.323693 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jm9fc\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-kube-api-access-jm9fc\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.365635 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-4l568"] Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.424634 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.424694 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jm9fc\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-kube-api-access-jm9fc\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.452144 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jm9fc\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-kube-api-access-jm9fc\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.452879 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f1c160ca-0866-46ab-859c-8557dc65e962-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-4757p\" (UID: \"f1c160ca-0866-46ab-859c-8557dc65e962\") " pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:32 crc kubenswrapper[5136]: I0320 07:07:32.619387 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" Mar 20 07:07:33 crc kubenswrapper[5136]: I0320 07:07:33.026143 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-4757p"] Mar 20 07:07:33 crc kubenswrapper[5136]: I0320 07:07:33.074210 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" event={"ID":"f1c160ca-0866-46ab-859c-8557dc65e962","Type":"ContainerStarted","Data":"78cb965da31a3740eabcdcfe3c1cc27b79d2696623d6f7cdda2bdadbaad9c1d3"} Mar 20 07:07:33 crc kubenswrapper[5136]: I0320 07:07:33.075354 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" event={"ID":"6168deec-ad68-4f6d-9736-422a6c7ade08","Type":"ContainerStarted","Data":"3a3543afc12e100f2f81c758885cdf492960ee9ecd5540d0f6fd5b73ce99e7b9"} Mar 20 07:07:37 crc kubenswrapper[5136]: I0320 07:07:37.103259 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" event={"ID":"f1c160ca-0866-46ab-859c-8557dc65e962","Type":"ContainerStarted","Data":"0f96c13e08bb5e1ff2b805d043523072db3db1bc56d61c29ddb216130b808e62"} Mar 20 07:07:37 crc kubenswrapper[5136]: I0320 07:07:37.104587 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" event={"ID":"6168deec-ad68-4f6d-9736-422a6c7ade08","Type":"ContainerStarted","Data":"4ae7f07e7e6afff2c58b21e404effc6408c5c208755b778f164e2863559bb61c"} Mar 20 07:07:37 crc kubenswrapper[5136]: I0320 07:07:37.104761 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:37 crc kubenswrapper[5136]: I0320 07:07:37.121387 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-4757p" podStartSLOduration=1.823097585 
podStartE2EDuration="5.121370108s" podCreationTimestamp="2026-03-20 07:07:32 +0000 UTC" firstStartedPulling="2026-03-20 07:07:33.034718456 +0000 UTC m=+1085.294029607" lastFinishedPulling="2026-03-20 07:07:36.332990979 +0000 UTC m=+1088.592302130" observedRunningTime="2026-03-20 07:07:37.120862262 +0000 UTC m=+1089.380173433" watchObservedRunningTime="2026-03-20 07:07:37.121370108 +0000 UTC m=+1089.380681259" Mar 20 07:07:37 crc kubenswrapper[5136]: I0320 07:07:37.144179 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" podStartSLOduration=2.182735645 podStartE2EDuration="6.144154691s" podCreationTimestamp="2026-03-20 07:07:31 +0000 UTC" firstStartedPulling="2026-03-20 07:07:32.370089337 +0000 UTC m=+1084.629400488" lastFinishedPulling="2026-03-20 07:07:36.331508373 +0000 UTC m=+1088.590819534" observedRunningTime="2026-03-20 07:07:37.139326982 +0000 UTC m=+1089.398638133" watchObservedRunningTime="2026-03-20 07:07:37.144154691 +0000 UTC m=+1089.403465842" Mar 20 07:07:41 crc kubenswrapper[5136]: I0320 07:07:41.928777 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-4l568" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.716417 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-d4w65"] Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.717545 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.722405 5136 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-n7tnn" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.724776 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-d4w65"] Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.796282 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v99bp\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-kube-api-access-v99bp\") pod \"cert-manager-545d4d4674-d4w65\" (UID: \"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.796401 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-bound-sa-token\") pod \"cert-manager-545d4d4674-d4w65\" (UID: \"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.897152 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-bound-sa-token\") pod \"cert-manager-545d4d4674-d4w65\" (UID: \"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.897220 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v99bp\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-kube-api-access-v99bp\") pod \"cert-manager-545d4d4674-d4w65\" (UID: 
\"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.916293 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-bound-sa-token\") pod \"cert-manager-545d4d4674-d4w65\" (UID: \"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:43 crc kubenswrapper[5136]: I0320 07:07:43.916777 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v99bp\" (UniqueName: \"kubernetes.io/projected/b06e6b2d-fcba-4ba1-9ba1-82585032b382-kube-api-access-v99bp\") pod \"cert-manager-545d4d4674-d4w65\" (UID: \"b06e6b2d-fcba-4ba1-9ba1-82585032b382\") " pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:44 crc kubenswrapper[5136]: I0320 07:07:44.053959 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-d4w65" Mar 20 07:07:44 crc kubenswrapper[5136]: I0320 07:07:44.519567 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-d4w65"] Mar 20 07:07:44 crc kubenswrapper[5136]: W0320 07:07:44.523753 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb06e6b2d_fcba_4ba1_9ba1_82585032b382.slice/crio-6eb4011f78ae045d59af74438cab59981227263fa2accee84d4a320d6f6de289 WatchSource:0}: Error finding container 6eb4011f78ae045d59af74438cab59981227263fa2accee84d4a320d6f6de289: Status 404 returned error can't find the container with id 6eb4011f78ae045d59af74438cab59981227263fa2accee84d4a320d6f6de289 Mar 20 07:07:45 crc kubenswrapper[5136]: I0320 07:07:45.154584 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-d4w65" 
event={"ID":"b06e6b2d-fcba-4ba1-9ba1-82585032b382","Type":"ContainerStarted","Data":"de4f720443c1f8e7a14ba9cdddc4b7f519cf83df3494b597cdcd2945e3271fea"} Mar 20 07:07:45 crc kubenswrapper[5136]: I0320 07:07:45.154656 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-d4w65" event={"ID":"b06e6b2d-fcba-4ba1-9ba1-82585032b382","Type":"ContainerStarted","Data":"6eb4011f78ae045d59af74438cab59981227263fa2accee84d4a320d6f6de289"} Mar 20 07:07:45 crc kubenswrapper[5136]: I0320 07:07:45.173398 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-d4w65" podStartSLOduration=2.173371997 podStartE2EDuration="2.173371997s" podCreationTimestamp="2026-03-20 07:07:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:07:45.171791548 +0000 UTC m=+1097.431102759" watchObservedRunningTime="2026-03-20 07:07:45.173371997 +0000 UTC m=+1097.432683158" Mar 20 07:07:45 crc kubenswrapper[5136]: I0320 07:07:45.822065 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:07:45 crc kubenswrapper[5136]: I0320 07:07:45.822417 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.770117 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:07:55 crc 
kubenswrapper[5136]: I0320 07:07:55.772443 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.778327 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.778849 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-xnckp" Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.779584 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.797030 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:07:55 crc kubenswrapper[5136]: I0320 07:07:55.965989 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9587t\" (UniqueName: \"kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t\") pod \"openstack-operator-index-hlbhx\" (UID: \"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746\") " pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:07:56 crc kubenswrapper[5136]: I0320 07:07:56.066733 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9587t\" (UniqueName: \"kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t\") pod \"openstack-operator-index-hlbhx\" (UID: \"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746\") " pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:07:56 crc kubenswrapper[5136]: I0320 07:07:56.088490 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9587t\" (UniqueName: 
\"kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t\") pod \"openstack-operator-index-hlbhx\" (UID: \"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746\") " pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:07:56 crc kubenswrapper[5136]: I0320 07:07:56.122107 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:07:56 crc kubenswrapper[5136]: I0320 07:07:56.557387 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:07:56 crc kubenswrapper[5136]: W0320 07:07:56.557670 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b5c9582_34cd_4c36_9ed2_7d1ed6fbc746.slice/crio-e920f18bf97664feae1196e8d874ec99bff74c30b302796caebdbd276e026818 WatchSource:0}: Error finding container e920f18bf97664feae1196e8d874ec99bff74c30b302796caebdbd276e026818: Status 404 returned error can't find the container with id e920f18bf97664feae1196e8d874ec99bff74c30b302796caebdbd276e026818 Mar 20 07:07:57 crc kubenswrapper[5136]: I0320 07:07:57.253796 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hlbhx" event={"ID":"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746","Type":"ContainerStarted","Data":"e920f18bf97664feae1196e8d874ec99bff74c30b302796caebdbd276e026818"} Mar 20 07:07:58 crc kubenswrapper[5136]: I0320 07:07:58.265148 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hlbhx" event={"ID":"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746","Type":"ContainerStarted","Data":"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0"} Mar 20 07:07:58 crc kubenswrapper[5136]: I0320 07:07:58.287112 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-hlbhx" 
podStartSLOduration=2.464778606 podStartE2EDuration="3.287084291s" podCreationTimestamp="2026-03-20 07:07:55 +0000 UTC" firstStartedPulling="2026-03-20 07:07:56.559861989 +0000 UTC m=+1108.819173150" lastFinishedPulling="2026-03-20 07:07:57.382167684 +0000 UTC m=+1109.641478835" observedRunningTime="2026-03-20 07:07:58.285490261 +0000 UTC m=+1110.544801432" watchObservedRunningTime="2026-03-20 07:07:58.287084291 +0000 UTC m=+1110.546395482" Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.136713 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.735612 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-w8k22"] Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.736513 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.743709 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm7rx\" (UniqueName: \"kubernetes.io/projected/4c933e5d-73ac-4820-a31c-e1d5cc5bcae0-kube-api-access-nm7rx\") pod \"openstack-operator-index-w8k22\" (UID: \"4c933e5d-73ac-4820-a31c-e1d5cc5bcae0\") " pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.762457 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w8k22"] Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.845146 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nm7rx\" (UniqueName: \"kubernetes.io/projected/4c933e5d-73ac-4820-a31c-e1d5cc5bcae0-kube-api-access-nm7rx\") pod \"openstack-operator-index-w8k22\" (UID: \"4c933e5d-73ac-4820-a31c-e1d5cc5bcae0\") " 
pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:07:59 crc kubenswrapper[5136]: I0320 07:07:59.884978 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nm7rx\" (UniqueName: \"kubernetes.io/projected/4c933e5d-73ac-4820-a31c-e1d5cc5bcae0-kube-api-access-nm7rx\") pod \"openstack-operator-index-w8k22\" (UID: \"4c933e5d-73ac-4820-a31c-e1d5cc5bcae0\") " pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.061073 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.139661 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566508-v874c"] Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.140417 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.146495 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.146490 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.147113 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.154241 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqrzb\" (UniqueName: \"kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb\") pod \"auto-csr-approver-29566508-v874c\" (UID: \"f9cf4346-e624-476e-b04c-43b35e0a83cd\") " pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 
07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.162568 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566508-v874c"] Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.256508 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqrzb\" (UniqueName: \"kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb\") pod \"auto-csr-approver-29566508-v874c\" (UID: \"f9cf4346-e624-476e-b04c-43b35e0a83cd\") " pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.273565 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqrzb\" (UniqueName: \"kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb\") pod \"auto-csr-approver-29566508-v874c\" (UID: \"f9cf4346-e624-476e-b04c-43b35e0a83cd\") " pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.278109 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-hlbhx" podUID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" containerName="registry-server" containerID="cri-o://5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0" gracePeriod=2 Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.463536 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.530135 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-w8k22"] Mar 20 07:08:00 crc kubenswrapper[5136]: W0320 07:08:00.539477 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c933e5d_73ac_4820_a31c_e1d5cc5bcae0.slice/crio-76ce34c505ebf194ed7fc6eda04ab8e3aa055d84f3be666cb76a2f116c0418fd WatchSource:0}: Error finding container 76ce34c505ebf194ed7fc6eda04ab8e3aa055d84f3be666cb76a2f116c0418fd: Status 404 returned error can't find the container with id 76ce34c505ebf194ed7fc6eda04ab8e3aa055d84f3be666cb76a2f116c0418fd Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.649954 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.661484 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9587t\" (UniqueName: \"kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t\") pod \"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746\" (UID: \"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746\") " Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.676889 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t" (OuterVolumeSpecName: "kube-api-access-9587t") pod "5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" (UID: "5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746"). InnerVolumeSpecName "kube-api-access-9587t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.763153 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9587t\" (UniqueName: \"kubernetes.io/projected/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746-kube-api-access-9587t\") on node \"crc\" DevicePath \"\"" Mar 20 07:08:00 crc kubenswrapper[5136]: I0320 07:08:00.918910 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566508-v874c"] Mar 20 07:08:00 crc kubenswrapper[5136]: W0320 07:08:00.921911 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9cf4346_e624_476e_b04c_43b35e0a83cd.slice/crio-a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad WatchSource:0}: Error finding container a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad: Status 404 returned error can't find the container with id a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.284617 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w8k22" event={"ID":"4c933e5d-73ac-4820-a31c-e1d5cc5bcae0","Type":"ContainerStarted","Data":"15852bfb046cc7c5e9594a43cfd4671a58abbaf60500c8a588c4151fa7ac4ca8"} Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.285447 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-w8k22" event={"ID":"4c933e5d-73ac-4820-a31c-e1d5cc5bcae0","Type":"ContainerStarted","Data":"76ce34c505ebf194ed7fc6eda04ab8e3aa055d84f3be666cb76a2f116c0418fd"} Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.286911 5136 generic.go:334] "Generic (PLEG): container finished" podID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" containerID="5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0" exitCode=0 Mar 20 07:08:01 crc 
kubenswrapper[5136]: I0320 07:08:01.286974 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hlbhx" event={"ID":"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746","Type":"ContainerDied","Data":"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0"} Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.287000 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-hlbhx" event={"ID":"5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746","Type":"ContainerDied","Data":"e920f18bf97664feae1196e8d874ec99bff74c30b302796caebdbd276e026818"} Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.287026 5136 scope.go:117] "RemoveContainer" containerID="5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0" Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.287207 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-hlbhx" Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.294740 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566508-v874c" event={"ID":"f9cf4346-e624-476e-b04c-43b35e0a83cd","Type":"ContainerStarted","Data":"a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad"} Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.312577 5136 scope.go:117] "RemoveContainer" containerID="5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0" Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.312844 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-w8k22" podStartSLOduration=1.8229884790000002 podStartE2EDuration="2.31280247s" podCreationTimestamp="2026-03-20 07:07:59 +0000 UTC" firstStartedPulling="2026-03-20 07:08:00.54513067 +0000 UTC m=+1112.804441831" lastFinishedPulling="2026-03-20 07:08:01.034944631 +0000 UTC m=+1113.294255822" 
observedRunningTime="2026-03-20 07:08:01.303066499 +0000 UTC m=+1113.562377680" watchObservedRunningTime="2026-03-20 07:08:01.31280247 +0000 UTC m=+1113.572113641" Mar 20 07:08:01 crc kubenswrapper[5136]: E0320 07:08:01.313563 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0\": container with ID starting with 5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0 not found: ID does not exist" containerID="5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0" Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.313609 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0"} err="failed to get container status \"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0\": rpc error: code = NotFound desc = could not find container \"5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0\": container with ID starting with 5167ea6bbf2199c0e60c90f86e04adb0a013e8245f41a8fd1450e8239f11a9b0 not found: ID does not exist" Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.332677 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:08:01 crc kubenswrapper[5136]: I0320 07:08:01.338076 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-hlbhx"] Mar 20 07:08:02 crc kubenswrapper[5136]: I0320 07:08:02.302454 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566508-v874c" event={"ID":"f9cf4346-e624-476e-b04c-43b35e0a83cd","Type":"ContainerStarted","Data":"dd0acbcfa54abd26f2307cdf5e361341926b1d9e084af4898d114e06b8c54d72"} Mar 20 07:08:02 crc kubenswrapper[5136]: I0320 07:08:02.315692 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566508-v874c" podStartSLOduration=1.230053305 podStartE2EDuration="2.31567662s" podCreationTimestamp="2026-03-20 07:08:00 +0000 UTC" firstStartedPulling="2026-03-20 07:08:00.925212094 +0000 UTC m=+1113.184523275" lastFinishedPulling="2026-03-20 07:08:02.010835439 +0000 UTC m=+1114.270146590" observedRunningTime="2026-03-20 07:08:02.313230225 +0000 UTC m=+1114.572541376" watchObservedRunningTime="2026-03-20 07:08:02.31567662 +0000 UTC m=+1114.574987771" Mar 20 07:08:02 crc kubenswrapper[5136]: I0320 07:08:02.405758 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" path="/var/lib/kubelet/pods/5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746/volumes" Mar 20 07:08:03 crc kubenswrapper[5136]: I0320 07:08:03.308475 5136 generic.go:334] "Generic (PLEG): container finished" podID="f9cf4346-e624-476e-b04c-43b35e0a83cd" containerID="dd0acbcfa54abd26f2307cdf5e361341926b1d9e084af4898d114e06b8c54d72" exitCode=0 Mar 20 07:08:03 crc kubenswrapper[5136]: I0320 07:08:03.308527 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566508-v874c" event={"ID":"f9cf4346-e624-476e-b04c-43b35e0a83cd","Type":"ContainerDied","Data":"dd0acbcfa54abd26f2307cdf5e361341926b1d9e084af4898d114e06b8c54d72"} Mar 20 07:08:04 crc kubenswrapper[5136]: I0320 07:08:04.530862 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:04 crc kubenswrapper[5136]: I0320 07:08:04.712608 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqrzb\" (UniqueName: \"kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb\") pod \"f9cf4346-e624-476e-b04c-43b35e0a83cd\" (UID: \"f9cf4346-e624-476e-b04c-43b35e0a83cd\") " Mar 20 07:08:04 crc kubenswrapper[5136]: I0320 07:08:04.718358 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb" (OuterVolumeSpecName: "kube-api-access-zqrzb") pod "f9cf4346-e624-476e-b04c-43b35e0a83cd" (UID: "f9cf4346-e624-476e-b04c-43b35e0a83cd"). InnerVolumeSpecName "kube-api-access-zqrzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:08:04 crc kubenswrapper[5136]: I0320 07:08:04.814082 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqrzb\" (UniqueName: \"kubernetes.io/projected/f9cf4346-e624-476e-b04c-43b35e0a83cd-kube-api-access-zqrzb\") on node \"crc\" DevicePath \"\"" Mar 20 07:08:05 crc kubenswrapper[5136]: I0320 07:08:05.322032 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566508-v874c" event={"ID":"f9cf4346-e624-476e-b04c-43b35e0a83cd","Type":"ContainerDied","Data":"a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad"} Mar 20 07:08:05 crc kubenswrapper[5136]: I0320 07:08:05.322071 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1037a25f726e7e8612ae2b1c969164c9045463f2c539c4db873d7f4119d29ad" Mar 20 07:08:05 crc kubenswrapper[5136]: I0320 07:08:05.322143 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566508-v874c" Mar 20 07:08:05 crc kubenswrapper[5136]: I0320 07:08:05.379006 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566502-5gzjz"] Mar 20 07:08:05 crc kubenswrapper[5136]: I0320 07:08:05.385418 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566502-5gzjz"] Mar 20 07:08:06 crc kubenswrapper[5136]: I0320 07:08:06.404802 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7239b4f-11f6-4f5c-8d78-c233e33b8a79" path="/var/lib/kubelet/pods/a7239b4f-11f6-4f5c-8d78-c233e33b8a79/volumes" Mar 20 07:08:10 crc kubenswrapper[5136]: I0320 07:08:10.061456 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:10 crc kubenswrapper[5136]: I0320 07:08:10.061724 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:10 crc kubenswrapper[5136]: I0320 07:08:10.088207 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:10 crc kubenswrapper[5136]: I0320 07:08:10.380341 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-w8k22" Mar 20 07:08:15 crc kubenswrapper[5136]: I0320 07:08:15.822432 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:08:15 crc kubenswrapper[5136]: I0320 07:08:15.824053 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.975236 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"]
Mar 20 07:08:16 crc kubenswrapper[5136]: E0320 07:08:16.975519 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" containerName="registry-server"
Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.975534 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" containerName="registry-server"
Mar 20 07:08:16 crc kubenswrapper[5136]: E0320 07:08:16.975550 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9cf4346-e624-476e-b04c-43b35e0a83cd" containerName="oc"
Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.975558 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9cf4346-e624-476e-b04c-43b35e0a83cd" containerName="oc"
Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.975701 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9cf4346-e624-476e-b04c-43b35e0a83cd" containerName="oc"
Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.975717 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b5c9582-34cd-4c36-9ed2-7d1ed6fbc746" containerName="registry-server"
Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.976656 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.978829 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nm92r"
Mar 20 07:08:16 crc kubenswrapper[5136]: I0320 07:08:16.989506 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"]
Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.081387 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch5wz\" (UniqueName: \"kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.081480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.081506 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.183182 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.183240 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.183300 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch5wz\" (UniqueName: \"kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.183724 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.183958 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.209862 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch5wz\" (UniqueName: \"kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz\") pod \"7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") " pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.294545 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:17 crc kubenswrapper[5136]: I0320 07:08:17.713982 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"]
Mar 20 07:08:18 crc kubenswrapper[5136]: I0320 07:08:18.419118 5136 generic.go:334] "Generic (PLEG): container finished" podID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerID="be947799277d738c82c9a3ce13ea1c74b6510ee0bdc70c2f09934722f7f1c708" exitCode=0
Mar 20 07:08:18 crc kubenswrapper[5136]: I0320 07:08:18.419465 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" event={"ID":"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7","Type":"ContainerDied","Data":"be947799277d738c82c9a3ce13ea1c74b6510ee0bdc70c2f09934722f7f1c708"}
Mar 20 07:08:18 crc kubenswrapper[5136]: I0320 07:08:18.420341 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" event={"ID":"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7","Type":"ContainerStarted","Data":"1aa8cf30f16c68898a28fa61c71e7a50e7e5107abfa611b214d418fe7fdbe7f3"}
Mar 20 07:08:19 crc kubenswrapper[5136]: I0320 07:08:19.428115 5136 generic.go:334] "Generic (PLEG): container finished" podID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerID="48752fb79ecfd378cd8c2169459d2c3bfa0b4e11636bc18a30a2688ca61ee6dd" exitCode=0
Mar 20 07:08:19 crc kubenswrapper[5136]: I0320 07:08:19.428284 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" event={"ID":"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7","Type":"ContainerDied","Data":"48752fb79ecfd378cd8c2169459d2c3bfa0b4e11636bc18a30a2688ca61ee6dd"}
Mar 20 07:08:20 crc kubenswrapper[5136]: I0320 07:08:20.435093 5136 generic.go:334] "Generic (PLEG): container finished" podID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerID="8e105ac4c123f8aa7db72a80a061a758c4c1bc2c44e0fcc4b2aaadb3a0ef3800" exitCode=0
Mar 20 07:08:20 crc kubenswrapper[5136]: I0320 07:08:20.435132 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" event={"ID":"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7","Type":"ContainerDied","Data":"8e105ac4c123f8aa7db72a80a061a758c4c1bc2c44e0fcc4b2aaadb3a0ef3800"}
Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.710879 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.745125 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch5wz\" (UniqueName: \"kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz\") pod \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") "
Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.745195 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle\") pod \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") "
Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.745265 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util\") pod \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\" (UID: \"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7\") "
Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.746033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle" (OuterVolumeSpecName: "bundle") pod "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" (UID: "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.762052 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util" (OuterVolumeSpecName: "util") pod "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" (UID: "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.762285 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz" (OuterVolumeSpecName: "kube-api-access-ch5wz") pod "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" (UID: "6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7"). InnerVolumeSpecName "kube-api-access-ch5wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.846571 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch5wz\" (UniqueName: \"kubernetes.io/projected/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-kube-api-access-ch5wz\") on node \"crc\" DevicePath \"\""
Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.846604 5136 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:08:21 crc kubenswrapper[5136]: I0320 07:08:21.846614 5136 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7-util\") on node \"crc\" DevicePath \"\""
Mar 20 07:08:22 crc kubenswrapper[5136]: I0320 07:08:22.452868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz" event={"ID":"6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7","Type":"ContainerDied","Data":"1aa8cf30f16c68898a28fa61c71e7a50e7e5107abfa611b214d418fe7fdbe7f3"}
Mar 20 07:08:22 crc kubenswrapper[5136]: I0320 07:08:22.452904 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1aa8cf30f16c68898a28fa61c71e7a50e7e5107abfa611b214d418fe7fdbe7f3"
Mar 20 07:08:22 crc kubenswrapper[5136]: I0320 07:08:22.452966 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.014477 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"]
Mar 20 07:08:29 crc kubenswrapper[5136]: E0320 07:08:29.015660 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="extract"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.015682 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="extract"
Mar 20 07:08:29 crc kubenswrapper[5136]: E0320 07:08:29.015701 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="pull"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.015713 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="pull"
Mar 20 07:08:29 crc kubenswrapper[5136]: E0320 07:08:29.015739 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="util"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.015750 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="util"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.015983 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7" containerName="extract"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.016531 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.020733 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-57h46"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.039733 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"]
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.136392 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln82h\" (UniqueName: \"kubernetes.io/projected/eb51f1ec-5289-4291-8334-0149c355adac-kube-api-access-ln82h\") pod \"openstack-operator-controller-init-b85c4d696-xv6qc\" (UID: \"eb51f1ec-5289-4291-8334-0149c355adac\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.219090 5136 scope.go:117] "RemoveContainer" containerID="ec7cb3f6c1f148e1e156127b9c7522e3ead66e4d7bc579e04406415291cd9efb"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.238205 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln82h\" (UniqueName: \"kubernetes.io/projected/eb51f1ec-5289-4291-8334-0149c355adac-kube-api-access-ln82h\") pod \"openstack-operator-controller-init-b85c4d696-xv6qc\" (UID: \"eb51f1ec-5289-4291-8334-0149c355adac\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.266569 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln82h\" (UniqueName: \"kubernetes.io/projected/eb51f1ec-5289-4291-8334-0149c355adac-kube-api-access-ln82h\") pod \"openstack-operator-controller-init-b85c4d696-xv6qc\" (UID: \"eb51f1ec-5289-4291-8334-0149c355adac\") " pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.336719 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"
Mar 20 07:08:29 crc kubenswrapper[5136]: I0320 07:08:29.735894 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"]
Mar 20 07:08:30 crc kubenswrapper[5136]: I0320 07:08:30.511843 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" event={"ID":"eb51f1ec-5289-4291-8334-0149c355adac","Type":"ContainerStarted","Data":"afe31b1604b676a82af0e66b45b5d805506271dd9ca6e14b7e97d51f34ce6ed2"}
Mar 20 07:08:34 crc kubenswrapper[5136]: I0320 07:08:34.539757 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" event={"ID":"eb51f1ec-5289-4291-8334-0149c355adac","Type":"ContainerStarted","Data":"af77bcd1ac1563ca59462c1ee12868372a65be7aafdf02f4d45b128401820154"}
Mar 20 07:08:34 crc kubenswrapper[5136]: I0320 07:08:34.541007 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"
Mar 20 07:08:39 crc kubenswrapper[5136]: I0320 07:08:39.340222 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc"
Mar 20 07:08:39 crc kubenswrapper[5136]: I0320 07:08:39.388063 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-b85c4d696-xv6qc" podStartSLOduration=7.676016652 podStartE2EDuration="11.388039969s" podCreationTimestamp="2026-03-20 07:08:28 +0000 UTC" firstStartedPulling="2026-03-20 07:08:29.74401522 +0000 UTC m=+1142.003326371" lastFinishedPulling="2026-03-20 07:08:33.456038537 +0000 UTC m=+1145.715349688" observedRunningTime="2026-03-20 07:08:34.573595048 +0000 UTC m=+1146.832906189" watchObservedRunningTime="2026-03-20 07:08:39.388039969 +0000 UTC m=+1151.647351130"
Mar 20 07:08:45 crc kubenswrapper[5136]: I0320 07:08:45.822254 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 07:08:45 crc kubenswrapper[5136]: I0320 07:08:45.822763 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 07:08:45 crc kubenswrapper[5136]: I0320 07:08:45.822885 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28"
Mar 20 07:08:45 crc kubenswrapper[5136]: I0320 07:08:45.823987 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 20 07:08:45 crc kubenswrapper[5136]: I0320 07:08:45.824127 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f" gracePeriod=600
Mar 20 07:08:46 crc kubenswrapper[5136]: I0320 07:08:46.635152 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f" exitCode=0
Mar 20 07:08:46 crc kubenswrapper[5136]: I0320 07:08:46.635262 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f"}
Mar 20 07:08:46 crc kubenswrapper[5136]: I0320 07:08:46.635880 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f"}
Mar 20 07:08:46 crc kubenswrapper[5136]: I0320 07:08:46.635910 5136 scope.go:117] "RemoveContainer" containerID="efc1d8deaa7f1e1d784e8be4e8d258b13fb86298f9c0df94ee4191513f62ba52"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.787048 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.788343 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.789688 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-n765f"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.799577 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.811095 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.811914 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.813451 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qsfdb"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.818292 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.819260 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.832273 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-tfqrt"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.845267 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.863355 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.878018 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.879000 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.880675 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ldctn"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.905264 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.920624 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.921438 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.923330 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-k68ck"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.925174 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.932689 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.933493 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.939160 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-tfmmz"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.940500 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.942289 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.943417 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.946118 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-4g7qk"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.946827 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgk5j\" (UniqueName: \"kubernetes.io/projected/95dfc6ea-897c-4133-ab1e-cefc81ab0623-kube-api-access-sgk5j\") pod \"cinder-operator-controller-manager-8d58dc466-g62fh\" (UID: \"95dfc6ea-897c-4133-ab1e-cefc81ab0623\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.946865 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r999q\" (UniqueName: \"kubernetes.io/projected/0454e048-0e5f-454d-a341-627512f745b9-kube-api-access-r999q\") pod \"designate-operator-controller-manager-588d4d986b-nzs5m\" (UID: \"0454e048-0e5f-454d-a341-627512f745b9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.947385 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvh4t\" (UniqueName: \"kubernetes.io/projected/86f2c200-3fc8-4ff8-abbd-4e9196951c84-kube-api-access-gvh4t\") pod \"barbican-operator-controller-manager-59bc569d95-5lz5s\" (UID: \"86f2c200-3fc8-4ff8-abbd-4e9196951c84\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.949118 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"]
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.973511 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.982174 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7crl6"
Mar 20 07:09:17 crc kubenswrapper[5136]: I0320 07:09:17.986079 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.022101 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk"]
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.026990 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq"]
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.036311 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"]
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.036448 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnhqc\" (UniqueName: \"kubernetes.io/projected/8035ac49-bf5e-4c7a-801a-2e0a9acdbec8-kube-api-access-lnhqc\") pod \"ironic-operator-controller-manager-6f787dddc9-cvwqk\" (UID: \"8035ac49-bf5e-4c7a-801a-2e0a9acdbec8\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048252 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-947qm\" (UniqueName: \"kubernetes.io/projected/d9bea0a5-4e0c-4eec-8c57-465238459ec5-kube-api-access-947qm\") pod \"glance-operator-controller-manager-79df6bcc97-4zc57\" (UID: \"d9bea0a5-4e0c-4eec-8c57-465238459ec5\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048288 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx4t2\" (UniqueName: \"kubernetes.io/projected/ce8f650c-1729-4d5d-ae70-6cefed6ebe33-kube-api-access-dx4t2\") pod \"horizon-operator-controller-manager-8464cc45fb-jqkmw\" (UID: \"ce8f650c-1729-4d5d-ae70-6cefed6ebe33\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gvh4t\" (UniqueName: \"kubernetes.io/projected/86f2c200-3fc8-4ff8-abbd-4e9196951c84-kube-api-access-gvh4t\") pod \"barbican-operator-controller-manager-59bc569d95-5lz5s\" (UID: \"86f2c200-3fc8-4ff8-abbd-4e9196951c84\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048366 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgk5j\" (UniqueName: \"kubernetes.io/projected/95dfc6ea-897c-4133-ab1e-cefc81ab0623-kube-api-access-sgk5j\") pod \"cinder-operator-controller-manager-8d58dc466-g62fh\" (UID: \"95dfc6ea-897c-4133-ab1e-cefc81ab0623\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048384 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r999q\" (UniqueName: \"kubernetes.io/projected/0454e048-0e5f-454d-a341-627512f745b9-kube-api-access-r999q\") pod \"designate-operator-controller-manager-588d4d986b-nzs5m\" (UID: \"0454e048-0e5f-454d-a341-627512f745b9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.048420 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8wb\" (UniqueName: \"kubernetes.io/projected/98ee6d09-7d19-49ff-af63-3f24c4bbf6de-kube-api-access-rb8wb\") pod \"heat-operator-controller-manager-67dd5f86f5-j7rd5\" (UID: \"98ee6d09-7d19-49ff-af63-3f24c4bbf6de\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.050448 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-cv5dm"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.054613 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq"]
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.065539 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw"]
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.066300 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.069066 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-p75jn"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.085600 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gvh4t\" (UniqueName: \"kubernetes.io/projected/86f2c200-3fc8-4ff8-abbd-4e9196951c84-kube-api-access-gvh4t\") pod \"barbican-operator-controller-manager-59bc569d95-5lz5s\" (UID: \"86f2c200-3fc8-4ff8-abbd-4e9196951c84\") " pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.089099 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r999q\" (UniqueName: \"kubernetes.io/projected/0454e048-0e5f-454d-a341-627512f745b9-kube-api-access-r999q\") pod \"designate-operator-controller-manager-588d4d986b-nzs5m\" (UID: \"0454e048-0e5f-454d-a341-627512f745b9\") " pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.097425 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgk5j\" (UniqueName: \"kubernetes.io/projected/95dfc6ea-897c-4133-ab1e-cefc81ab0623-kube-api-access-sgk5j\") pod \"cinder-operator-controller-manager-8d58dc466-g62fh\" (UID: \"95dfc6ea-897c-4133-ab1e-cefc81ab0623\") " pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.102170 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x"]
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.103171 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.106660 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6wwtg"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.111000 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.119556 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw"]
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.132289 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x"]
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.133394 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.138616 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-8g592"]
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.139379 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.141954 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-fb4jj"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.148564 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149114 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drd4c\" (UniqueName: \"kubernetes.io/projected/86ae10c6-6dff-4cac-a399-e03bd4de7134-kube-api-access-drd4c\") pod \"keystone-operator-controller-manager-768b96df4c-9vwxq\" (UID: \"86ae10c6-6dff-4cac-a399-e03bd4de7134\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149159 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb8wb\" (UniqueName: \"kubernetes.io/projected/98ee6d09-7d19-49ff-af63-3f24c4bbf6de-kube-api-access-rb8wb\") pod \"heat-operator-controller-manager-67dd5f86f5-j7rd5\" (UID: \"98ee6d09-7d19-49ff-af63-3f24c4bbf6de\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5"
Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149240 5136 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"kube-api-access-lnhqc\" (UniqueName: \"kubernetes.io/projected/8035ac49-bf5e-4c7a-801a-2e0a9acdbec8-kube-api-access-lnhqc\") pod \"ironic-operator-controller-manager-6f787dddc9-cvwqk\" (UID: \"8035ac49-bf5e-4c7a-801a-2e0a9acdbec8\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149260 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jcbh\" (UniqueName: \"kubernetes.io/projected/fad403b0-ff16-4bfe-a0e3-8f0da431260b-kube-api-access-7jcbh\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.149302 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-947qm\" (UniqueName: \"kubernetes.io/projected/d9bea0a5-4e0c-4eec-8c57-465238459ec5-kube-api-access-947qm\") pod \"glance-operator-controller-manager-79df6bcc97-4zc57\" (UID: \"d9bea0a5-4e0c-4eec-8c57-465238459ec5\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.150010 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx4t2\" (UniqueName: \"kubernetes.io/projected/ce8f650c-1729-4d5d-ae70-6cefed6ebe33-kube-api-access-dx4t2\") pod \"horizon-operator-controller-manager-8464cc45fb-jqkmw\" (UID: \"ce8f650c-1729-4d5d-ae70-6cefed6ebe33\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.156935 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-8g592"] Mar 20 07:09:18 crc kubenswrapper[5136]: 
I0320 07:09:18.162662 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.163451 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.166590 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnhqc\" (UniqueName: \"kubernetes.io/projected/8035ac49-bf5e-4c7a-801a-2e0a9acdbec8-kube-api-access-lnhqc\") pod \"ironic-operator-controller-manager-6f787dddc9-cvwqk\" (UID: \"8035ac49-bf5e-4c7a-801a-2e0a9acdbec8\") " pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.166891 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mhnst" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.167680 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-947qm\" (UniqueName: \"kubernetes.io/projected/d9bea0a5-4e0c-4eec-8c57-465238459ec5-kube-api-access-947qm\") pod \"glance-operator-controller-manager-79df6bcc97-4zc57\" (UID: \"d9bea0a5-4e0c-4eec-8c57-465238459ec5\") " pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.170268 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb8wb\" (UniqueName: \"kubernetes.io/projected/98ee6d09-7d19-49ff-af63-3f24c4bbf6de-kube-api-access-rb8wb\") pod \"heat-operator-controller-manager-67dd5f86f5-j7rd5\" (UID: \"98ee6d09-7d19-49ff-af63-3f24c4bbf6de\") " pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.171660 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx4t2\" (UniqueName: \"kubernetes.io/projected/ce8f650c-1729-4d5d-ae70-6cefed6ebe33-kube-api-access-dx4t2\") pod \"horizon-operator-controller-manager-8464cc45fb-jqkmw\" (UID: \"ce8f650c-1729-4d5d-ae70-6cefed6ebe33\") " pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.180473 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.181383 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.184753 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.187035 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-wpbj4" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.193884 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.201074 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.207047 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.211441 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.214418 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-d2gj7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.215100 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.221281 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-58pk7"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.222346 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.225616 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7dbrt" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.229887 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.230604 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.232880 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-5frbx" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.243336 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.244861 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251388 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drd4c\" (UniqueName: \"kubernetes.io/projected/86ae10c6-6dff-4cac-a399-e03bd4de7134-kube-api-access-drd4c\") pod \"keystone-operator-controller-manager-768b96df4c-9vwxq\" (UID: \"86ae10c6-6dff-4cac-a399-e03bd4de7134\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251472 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmfmm\" (UniqueName: \"kubernetes.io/projected/527edb93-1d3a-45f7-a7c9-f9e28fb6f713-kube-api-access-zmfmm\") pod \"placement-operator-controller-manager-5784578c99-58pk7\" (UID: \"527edb93-1d3a-45f7-a7c9-f9e28fb6f713\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251494 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qh4bw\" (UniqueName: \"kubernetes.io/projected/10cd2a26-beca-4a3b-a791-83cc8cc451ab-kube-api-access-qh4bw\") pod 
\"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251515 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251532 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwhrd\" (UniqueName: \"kubernetes.io/projected/2f2fc86c-b42c-4fd9-94e6-817ed073035d-kube-api-access-kwhrd\") pod \"neutron-operator-controller-manager-767865f676-8g592\" (UID: \"2f2fc86c-b42c-4fd9-94e6-817ed073035d\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251553 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jcbh\" (UniqueName: \"kubernetes.io/projected/fad403b0-ff16-4bfe-a0e3-8f0da431260b-kube-api-access-7jcbh\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251583 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6m2l\" (UniqueName: \"kubernetes.io/projected/9b7da04b-f73c-4838-978d-34e4665f3963-kube-api-access-v6m2l\") pod \"nova-operator-controller-manager-5d488d59fb-rdkrz\" (UID: 
\"9b7da04b-f73c-4838-978d-34e4665f3963\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251604 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dxjm\" (UniqueName: \"kubernetes.io/projected/e85f51ac-f1e1-4299-91a6-9b27dcc50967-kube-api-access-8dxjm\") pod \"octavia-operator-controller-manager-5b9f45d989-sshvb\" (UID: \"e85f51ac-f1e1-4299-91a6-9b27dcc50967\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251620 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ld55\" (UniqueName: \"kubernetes.io/projected/84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6-kube-api-access-5ld55\") pod \"mariadb-operator-controller-manager-67ccfc9778-w497x\" (UID: \"84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251637 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvqjm\" (UniqueName: \"kubernetes.io/projected/67cd41a3-e91f-4d51-b79a-61d697bbf646-kube-api-access-hvqjm\") pod \"ovn-operator-controller-manager-884679f54-pdmtp\" (UID: \"67cd41a3-e91f-4d51-b79a-61d697bbf646\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.251657 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pwm8\" (UniqueName: \"kubernetes.io/projected/0688d3df-a125-4d57-9699-a87d92b140fa-kube-api-access-4pwm8\") pod \"manila-operator-controller-manager-55f864c847-wz6kw\" (UID: \"0688d3df-a125-4d57-9699-a87d92b140fa\") " 
pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.251853 5136 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.251911 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. No retries permitted until 2026-03-20 07:09:18.751880281 +0000 UTC m=+1191.011191432 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.254326 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.262790 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-58pk7"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.280750 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.281959 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.284550 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-b994r" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.291199 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.303358 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.320169 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.321143 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drd4c\" (UniqueName: \"kubernetes.io/projected/86ae10c6-6dff-4cac-a399-e03bd4de7134-kube-api-access-drd4c\") pod \"keystone-operator-controller-manager-768b96df4c-9vwxq\" (UID: \"86ae10c6-6dff-4cac-a399-e03bd4de7134\") " pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.321255 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jcbh\" (UniqueName: \"kubernetes.io/projected/fad403b0-ff16-4bfe-a0e3-8f0da431260b-kube-api-access-7jcbh\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352558 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-zmfmm\" (UniqueName: \"kubernetes.io/projected/527edb93-1d3a-45f7-a7c9-f9e28fb6f713-kube-api-access-zmfmm\") pod \"placement-operator-controller-manager-5784578c99-58pk7\" (UID: \"527edb93-1d3a-45f7-a7c9-f9e28fb6f713\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352615 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qh4bw\" (UniqueName: \"kubernetes.io/projected/10cd2a26-beca-4a3b-a791-83cc8cc451ab-kube-api-access-qh4bw\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352651 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352679 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwhrd\" (UniqueName: \"kubernetes.io/projected/2f2fc86c-b42c-4fd9-94e6-817ed073035d-kube-api-access-kwhrd\") pod \"neutron-operator-controller-manager-767865f676-8g592\" (UID: \"2f2fc86c-b42c-4fd9-94e6-817ed073035d\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352722 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6m2l\" (UniqueName: \"kubernetes.io/projected/9b7da04b-f73c-4838-978d-34e4665f3963-kube-api-access-v6m2l\") pod 
\"nova-operator-controller-manager-5d488d59fb-rdkrz\" (UID: \"9b7da04b-f73c-4838-978d-34e4665f3963\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352752 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dxjm\" (UniqueName: \"kubernetes.io/projected/e85f51ac-f1e1-4299-91a6-9b27dcc50967-kube-api-access-8dxjm\") pod \"octavia-operator-controller-manager-5b9f45d989-sshvb\" (UID: \"e85f51ac-f1e1-4299-91a6-9b27dcc50967\") " pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352774 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ld55\" (UniqueName: \"kubernetes.io/projected/84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6-kube-api-access-5ld55\") pod \"mariadb-operator-controller-manager-67ccfc9778-w497x\" (UID: \"84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352799 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvqjm\" (UniqueName: \"kubernetes.io/projected/67cd41a3-e91f-4d51-b79a-61d697bbf646-kube-api-access-hvqjm\") pod \"ovn-operator-controller-manager-884679f54-pdmtp\" (UID: \"67cd41a3-e91f-4d51-b79a-61d697bbf646\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.352843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pwm8\" (UniqueName: \"kubernetes.io/projected/0688d3df-a125-4d57-9699-a87d92b140fa-kube-api-access-4pwm8\") pod \"manila-operator-controller-manager-55f864c847-wz6kw\" (UID: \"0688d3df-a125-4d57-9699-a87d92b140fa\") " 
pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.353852 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.353876 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr"] Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.353910 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:18.853886897 +0000 UTC m=+1191.113198048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.354756 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.363785 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.366832 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.384143 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-jf8dj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.398663 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qh4bw\" (UniqueName: \"kubernetes.io/projected/10cd2a26-beca-4a3b-a791-83cc8cc451ab-kube-api-access-qh4bw\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.400344 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmfmm\" (UniqueName: \"kubernetes.io/projected/527edb93-1d3a-45f7-a7c9-f9e28fb6f713-kube-api-access-zmfmm\") pod \"placement-operator-controller-manager-5784578c99-58pk7\" (UID: \"527edb93-1d3a-45f7-a7c9-f9e28fb6f713\") " pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.400789 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pwm8\" (UniqueName: \"kubernetes.io/projected/0688d3df-a125-4d57-9699-a87d92b140fa-kube-api-access-4pwm8\") pod \"manila-operator-controller-manager-55f864c847-wz6kw\" (UID: \"0688d3df-a125-4d57-9699-a87d92b140fa\") " pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.408297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvqjm\" (UniqueName: \"kubernetes.io/projected/67cd41a3-e91f-4d51-b79a-61d697bbf646-kube-api-access-hvqjm\") pod 
\"ovn-operator-controller-manager-884679f54-pdmtp\" (UID: \"67cd41a3-e91f-4d51-b79a-61d697bbf646\") " pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.455734 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7lx8\" (UniqueName: \"kubernetes.io/projected/489b4c0d-9288-4e00-84ac-23fb05767840-kube-api-access-k7lx8\") pod \"telemetry-operator-controller-manager-d6b694c5-qwtfr\" (UID: \"489b4c0d-9288-4e00-84ac-23fb05767840\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.456618 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkhnb\" (UniqueName: \"kubernetes.io/projected/8129ebe9-8537-403e-9c32-835f54b5d878-kube-api-access-kkhnb\") pod \"swift-operator-controller-manager-c674c5965-jmsnc\" (UID: \"8129ebe9-8537-403e-9c32-835f54b5d878\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.408757 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ld55\" (UniqueName: \"kubernetes.io/projected/84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6-kube-api-access-5ld55\") pod \"mariadb-operator-controller-manager-67ccfc9778-w497x\" (UID: \"84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6\") " pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.458756 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dxjm\" (UniqueName: \"kubernetes.io/projected/e85f51ac-f1e1-4299-91a6-9b27dcc50967-kube-api-access-8dxjm\") pod \"octavia-operator-controller-manager-5b9f45d989-sshvb\" (UID: \"e85f51ac-f1e1-4299-91a6-9b27dcc50967\") " 
pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.459603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6m2l\" (UniqueName: \"kubernetes.io/projected/9b7da04b-f73c-4838-978d-34e4665f3963-kube-api-access-v6m2l\") pod \"nova-operator-controller-manager-5d488d59fb-rdkrz\" (UID: \"9b7da04b-f73c-4838-978d-34e4665f3963\") " pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.461503 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwhrd\" (UniqueName: \"kubernetes.io/projected/2f2fc86c-b42c-4fd9-94e6-817ed073035d-kube-api-access-kwhrd\") pod \"neutron-operator-controller-manager-767865f676-8g592\" (UID: \"2f2fc86c-b42c-4fd9-94e6-817ed073035d\") " pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.465960 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.496273 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.548321 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.549841 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.563906 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkhnb\" (UniqueName: \"kubernetes.io/projected/8129ebe9-8537-403e-9c32-835f54b5d878-kube-api-access-kkhnb\") pod \"swift-operator-controller-manager-c674c5965-jmsnc\" (UID: \"8129ebe9-8537-403e-9c32-835f54b5d878\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.564049 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7lx8\" (UniqueName: \"kubernetes.io/projected/489b4c0d-9288-4e00-84ac-23fb05767840-kube-api-access-k7lx8\") pod \"telemetry-operator-controller-manager-d6b694c5-qwtfr\" (UID: \"489b4c0d-9288-4e00-84ac-23fb05767840\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.599485 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.602375 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.606241 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7lx8\" (UniqueName: \"kubernetes.io/projected/489b4c0d-9288-4e00-84ac-23fb05767840-kube-api-access-k7lx8\") pod \"telemetry-operator-controller-manager-d6b694c5-qwtfr\" (UID: \"489b4c0d-9288-4e00-84ac-23fb05767840\") " pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.606311 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkhnb\" (UniqueName: \"kubernetes.io/projected/8129ebe9-8537-403e-9c32-835f54b5d878-kube-api-access-kkhnb\") pod \"swift-operator-controller-manager-c674c5965-jmsnc\" (UID: \"8129ebe9-8537-403e-9c32-835f54b5d878\") " pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.619538 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.622235 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.630024 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.632435 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.637360 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-mcxc5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.635807 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.647625 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.664853 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.665635 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.674965 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-9669l" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.690048 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.705155 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.706315 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.711735 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cldn9" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.712355 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.713660 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.714059 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.730943 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.735649 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.736523 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.738592 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bnzr2" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.741302 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.768142 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpztp\" (UniqueName: \"kubernetes.io/projected/547cee69-3d64-49aa-8e95-c19be2bb3089-kube-api-access-wpztp\") pod \"test-operator-controller-manager-5c5cb9c4d7-v4npm\" (UID: \"547cee69-3d64-49aa-8e95-c19be2bb3089\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.768207 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.768337 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qscpw\" (UniqueName: \"kubernetes.io/projected/f50bceb5-4fe7-4eba-a9a2-e40f6c89583a-kube-api-access-qscpw\") pod \"watcher-operator-controller-manager-6c4d75f7f9-xp6jw\" (UID: \"f50bceb5-4fe7-4eba-a9a2-e40f6c89583a\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.769394 5136 secret.go:188] Couldn't get secret 
openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.769449 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. No retries permitted until 2026-03-20 07:09:19.769430596 +0000 UTC m=+1192.028741827 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.797680 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"] Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869504 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869552 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869595 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh9td\" (UniqueName: \"kubernetes.io/projected/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-kube-api-access-zh9td\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869633 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpztp\" (UniqueName: \"kubernetes.io/projected/547cee69-3d64-49aa-8e95-c19be2bb3089-kube-api-access-wpztp\") pod \"test-operator-controller-manager-5c5cb9c4d7-v4npm\" (UID: \"547cee69-3d64-49aa-8e95-c19be2bb3089\") " pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869783 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qscpw\" (UniqueName: \"kubernetes.io/projected/f50bceb5-4fe7-4eba-a9a2-e40f6c89583a-kube-api-access-qscpw\") pod \"watcher-operator-controller-manager-6c4d75f7f9-xp6jw\" (UID: \"f50bceb5-4fe7-4eba-a9a2-e40f6c89583a\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.869894 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.870040 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 
07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.870105 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:19.870088591 +0000 UTC m=+1192.129399812 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.870135 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6gx6\" (UniqueName: \"kubernetes.io/projected/3dcb58f9-ad42-41ad-af27-2ca462257e77-kube-api-access-w6gx6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vlngd\" (UID: \"3dcb58f9-ad42-41ad-af27-2ca462257e77\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.894598 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qscpw\" (UniqueName: \"kubernetes.io/projected/f50bceb5-4fe7-4eba-a9a2-e40f6c89583a-kube-api-access-qscpw\") pod \"watcher-operator-controller-manager-6c4d75f7f9-xp6jw\" (UID: \"f50bceb5-4fe7-4eba-a9a2-e40f6c89583a\") " pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.898622 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpztp\" (UniqueName: \"kubernetes.io/projected/547cee69-3d64-49aa-8e95-c19be2bb3089-kube-api-access-wpztp\") pod \"test-operator-controller-manager-5c5cb9c4d7-v4npm\" (UID: \"547cee69-3d64-49aa-8e95-c19be2bb3089\") " 
pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.971427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6gx6\" (UniqueName: \"kubernetes.io/projected/3dcb58f9-ad42-41ad-af27-2ca462257e77-kube-api-access-w6gx6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vlngd\" (UID: \"3dcb58f9-ad42-41ad-af27-2ca462257e77\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.971514 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.971557 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.971601 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh9td\" (UniqueName: \"kubernetes.io/projected/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-kube-api-access-zh9td\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.971911 5136 secret.go:188] Couldn't get secret 
openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.972049 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.972112 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:19.472090365 +0000 UTC m=+1191.731401716 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: E0320 07:09:18.972576 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:19.47255772 +0000 UTC m=+1191.731868961 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.982389 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" Mar 20 07:09:18 crc kubenswrapper[5136]: I0320 07:09:18.986932 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.001083 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh9td\" (UniqueName: \"kubernetes.io/projected/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-kube-api-access-zh9td\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.002161 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6gx6\" (UniqueName: \"kubernetes.io/projected/3dcb58f9-ad42-41ad-af27-2ca462257e77-kube-api-access-w6gx6\") pod \"rabbitmq-cluster-operator-manager-668c99d594-vlngd\" (UID: \"3dcb58f9-ad42-41ad-af27-2ca462257e77\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.021620 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" event={"ID":"95dfc6ea-897c-4133-ab1e-cefc81ab0623","Type":"ContainerStarted","Data":"6d52c7ec9e589fb6228568a2afbe2ccd7a67000a56c9b462d764d82a456bd86f"} Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.022555 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" event={"ID":"86f2c200-3fc8-4ff8-abbd-4e9196951c84","Type":"ContainerStarted","Data":"f46dd0bab3311676d732b89038e462558d3bddc571f95fe1b7f8e1efc179db88"} Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.036741 5136 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd9bea0a5_4e0c_4eec_8c57_465238459ec5.slice/crio-ab8515f21f6116d7dd6124ffbeb3219a34d9e5e25e5e02a1c9806e2a0d70141c WatchSource:0}: Error finding container ab8515f21f6116d7dd6124ffbeb3219a34d9e5e25e5e02a1c9806e2a0d70141c: Status 404 returned error can't find the container with id ab8515f21f6116d7dd6124ffbeb3219a34d9e5e25e5e02a1c9806e2a0d70141c Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.069212 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.113311 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.173492 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.212127 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.218365 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.218557 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce8f650c_1729_4d5d_ae70_6cefed6ebe33.slice/crio-d07dd65bf01654bd51264cc43e03760de9872f8704636afcb5ce65dfa70a9b55 WatchSource:0}: Error finding container d07dd65bf01654bd51264cc43e03760de9872f8704636afcb5ce65dfa70a9b55: Status 404 returned error can't find the container with id 
d07dd65bf01654bd51264cc43e03760de9872f8704636afcb5ce65dfa70a9b55 Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.228077 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0454e048_0e5f_454d_a341_627512f745b9.slice/crio-ef8bb80a1c6820998977168380a58092ab5a37b3ad941e68cecae9635e482f99 WatchSource:0}: Error finding container ef8bb80a1c6820998977168380a58092ab5a37b3ad941e68cecae9635e482f99: Status 404 returned error can't find the container with id ef8bb80a1c6820998977168380a58092ab5a37b3ad941e68cecae9635e482f99 Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.229399 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98ee6d09_7d19_49ff_af63_3f24c4bbf6de.slice/crio-36225ad1e966f3fc44ea4633cc0a0962356caf33a7552e98f1bd7429b7f6ce7a WatchSource:0}: Error finding container 36225ad1e966f3fc44ea4633cc0a0962356caf33a7552e98f1bd7429b7f6ce7a: Status 404 returned error can't find the container with id 36225ad1e966f3fc44ea4633cc0a0962356caf33a7552e98f1bd7429b7f6ce7a Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.340295 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.345182 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.348846 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86ae10c6_6dff_4cac_a399_e03bd4de7134.slice/crio-c4d293a6d66c93fe0c3bb7c1289850f0445f37440ca7997f2c529a8f6456572f WatchSource:0}: Error finding container c4d293a6d66c93fe0c3bb7c1289850f0445f37440ca7997f2c529a8f6456572f: Status 404 returned error can't find the 
container with id c4d293a6d66c93fe0c3bb7c1289850f0445f37440ca7997f2c529a8f6456572f Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.483521 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.483586 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.484022 5136 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.484085 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:20.48406829 +0000 UTC m=+1192.743379441 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.484653 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.484698 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:20.484683839 +0000 UTC m=+1192.743994990 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.528782 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.544094 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.549176 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0688d3df_a125_4d57_9699_a87d92b140fa.slice/crio-b22f81c516d49e0d340aa17e982307efaf108e8a90bae63a08ef8acfe23ef40c WatchSource:0}: Error finding container b22f81c516d49e0d340aa17e982307efaf108e8a90bae63a08ef8acfe23ef40c: Status 404 returned error can't find the 
container with id b22f81c516d49e0d340aa17e982307efaf108e8a90bae63a08ef8acfe23ef40c Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.553997 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.554608 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b7da04b_f73c_4838_978d_34e4665f3963.slice/crio-73e8c644d4502c60cd2d2fd70f7407683cec7107da3f23d3e0be01e0c5e0e97e WatchSource:0}: Error finding container 73e8c644d4502c60cd2d2fd70f7407683cec7107da3f23d3e0be01e0c5e0e97e: Status 404 returned error can't find the container with id 73e8c644d4502c60cd2d2fd70f7407683cec7107da3f23d3e0be01e0c5e0e97e Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.561261 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.564177 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8129ebe9_8537_403e_9c32_835f54b5d878.slice/crio-25ac1d1306ae55263e964b61d8363f6d59ea0ec325325ae78ed8235c29a8d8e3 WatchSource:0}: Error finding container 25ac1d1306ae55263e964b61d8363f6d59ea0ec325325ae78ed8235c29a8d8e3: Status 404 returned error can't find the container with id 25ac1d1306ae55263e964b61d8363f6d59ea0ec325325ae78ed8235c29a8d8e3 Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.571506 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.650293 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 
07:09:19.659740 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5784578c99-58pk7"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.666181 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-767865f676-8g592"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.671838 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp"] Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.685529 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zmfmm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5784578c99-58pk7_openstack-operators(527edb93-1d3a-45f7-a7c9-f9e28fb6f713): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.686227 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hvqjm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-884679f54-pdmtp_openstack-operators(67cd41a3-e91f-4d51-b79a-61d697bbf646): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.686285 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwhrd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-767865f676-8g592_openstack-operators(2f2fc86c-b42c-4fd9-94e6-817ed073035d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.686934 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" podUID="527edb93-1d3a-45f7-a7c9-f9e28fb6f713" Mar 20 07:09:19 crc 
kubenswrapper[5136]: E0320 07:09:19.687473 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" podUID="2f2fc86c-b42c-4fd9-94e6-817ed073035d" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.687530 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" podUID="67cd41a3-e91f-4d51-b79a-61d697bbf646" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.782559 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.787709 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd"] Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.791200 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.791396 5136 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.791446 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. 
No retries permitted until 2026-03-20 07:09:21.791431497 +0000 UTC m=+1194.050742648 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.794983 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod547cee69_3d64_49aa_8e95_c19be2bb3089.slice/crio-7aa523b629b8a63c8ed8a40bc8168e2a82b7e143c97893f260f8bcdb6413a45f WatchSource:0}: Error finding container 7aa523b629b8a63c8ed8a40bc8168e2a82b7e143c97893f260f8bcdb6413a45f: Status 404 returned error can't find the container with id 7aa523b629b8a63c8ed8a40bc8168e2a82b7e143c97893f260f8bcdb6413a45f Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.798235 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcb58f9_ad42_41ad_af27_2ca462257e77.slice/crio-784502a1365b3491c92996e59008e0ae2b5ad83461199e657972bc723b7b6312 WatchSource:0}: Error finding container 784502a1365b3491c92996e59008e0ae2b5ad83461199e657972bc723b7b6312: Status 404 returned error can't find the container with id 784502a1365b3491c92996e59008e0ae2b5ad83461199e657972bc723b7b6312 Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.799542 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wpztp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5c5cb9c4d7-v4npm_openstack-operators(547cee69-3d64-49aa-8e95-c19be2bb3089): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.800743 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" podUID="547cee69-3d64-49aa-8e95-c19be2bb3089" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.800915 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w6gx6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-vlngd_openstack-operators(3dcb58f9-ad42-41ad-af27-2ca462257e77): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.802085 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" podUID="3dcb58f9-ad42-41ad-af27-2ca462257e77" Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.825430 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw"] Mar 20 07:09:19 crc kubenswrapper[5136]: W0320 07:09:19.837094 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf50bceb5_4fe7_4eba_a9a2_e40f6c89583a.slice/crio-9f3d36e401e6b3772631ef82890e5763e9271c094dd0958793705c7fb73d1918 WatchSource:0}: Error finding container 9f3d36e401e6b3772631ef82890e5763e9271c094dd0958793705c7fb73d1918: Status 404 returned error can't find the container with id 9f3d36e401e6b3772631ef82890e5763e9271c094dd0958793705c7fb73d1918 Mar 20 07:09:19 crc kubenswrapper[5136]: I0320 07:09:19.892345 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.892535 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:19 crc kubenswrapper[5136]: E0320 07:09:19.892623 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:21.892605346 +0000 UTC m=+1194.151916497 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.039475 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" event={"ID":"9b7da04b-f73c-4838-978d-34e4665f3963","Type":"ContainerStarted","Data":"73e8c644d4502c60cd2d2fd70f7407683cec7107da3f23d3e0be01e0c5e0e97e"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.040796 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" event={"ID":"ce8f650c-1729-4d5d-ae70-6cefed6ebe33","Type":"ContainerStarted","Data":"d07dd65bf01654bd51264cc43e03760de9872f8704636afcb5ce65dfa70a9b55"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.041899 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" event={"ID":"0688d3df-a125-4d57-9699-a87d92b140fa","Type":"ContainerStarted","Data":"b22f81c516d49e0d340aa17e982307efaf108e8a90bae63a08ef8acfe23ef40c"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.047568 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" event={"ID":"98ee6d09-7d19-49ff-af63-3f24c4bbf6de","Type":"ContainerStarted","Data":"36225ad1e966f3fc44ea4633cc0a0962356caf33a7552e98f1bd7429b7f6ce7a"} Mar 20 07:09:20 crc kubenswrapper[5136]: 
I0320 07:09:20.056293 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" event={"ID":"8129ebe9-8537-403e-9c32-835f54b5d878","Type":"ContainerStarted","Data":"25ac1d1306ae55263e964b61d8363f6d59ea0ec325325ae78ed8235c29a8d8e3"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.058643 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" event={"ID":"0454e048-0e5f-454d-a341-627512f745b9","Type":"ContainerStarted","Data":"ef8bb80a1c6820998977168380a58092ab5a37b3ad941e68cecae9635e482f99"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.059688 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" event={"ID":"f50bceb5-4fe7-4eba-a9a2-e40f6c89583a","Type":"ContainerStarted","Data":"9f3d36e401e6b3772631ef82890e5763e9271c094dd0958793705c7fb73d1918"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.070095 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" event={"ID":"86ae10c6-6dff-4cac-a399-e03bd4de7134","Type":"ContainerStarted","Data":"c4d293a6d66c93fe0c3bb7c1289850f0445f37440ca7997f2c529a8f6456572f"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.077917 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" event={"ID":"547cee69-3d64-49aa-8e95-c19be2bb3089","Type":"ContainerStarted","Data":"7aa523b629b8a63c8ed8a40bc8168e2a82b7e143c97893f260f8bcdb6413a45f"} Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.079623 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" podUID="547cee69-3d64-49aa-8e95-c19be2bb3089" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.087174 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" event={"ID":"d9bea0a5-4e0c-4eec-8c57-465238459ec5","Type":"ContainerStarted","Data":"ab8515f21f6116d7dd6124ffbeb3219a34d9e5e25e5e02a1c9806e2a0d70141c"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.108622 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" event={"ID":"8035ac49-bf5e-4c7a-801a-2e0a9acdbec8","Type":"ContainerStarted","Data":"1de07cce7bd6f56ba57d6a22bd7ec9200c1b500006113e31e76041a1b80633ea"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.124045 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" event={"ID":"489b4c0d-9288-4e00-84ac-23fb05767840","Type":"ContainerStarted","Data":"e596847afc4a0bb34dac621dbd0fd51d833084f6850522532affbdac0f50afd1"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.125501 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" event={"ID":"2f2fc86c-b42c-4fd9-94e6-817ed073035d","Type":"ContainerStarted","Data":"69a4f84916dee0993277b689b1450283bb636bf87cd6ef179042c0741f1d9dfe"} Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.127153 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a\\\"\"" 
pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" podUID="2f2fc86c-b42c-4fd9-94e6-817ed073035d" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.128393 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" event={"ID":"3dcb58f9-ad42-41ad-af27-2ca462257e77","Type":"ContainerStarted","Data":"784502a1365b3491c92996e59008e0ae2b5ad83461199e657972bc723b7b6312"} Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.129042 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" podUID="3dcb58f9-ad42-41ad-af27-2ca462257e77" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.129924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" event={"ID":"527edb93-1d3a-45f7-a7c9-f9e28fb6f713","Type":"ContainerStarted","Data":"d1fbfd7b424fa4253dc3a511a6db7e171f8b1c0d710bf2f8760d06b44636bd6d"} Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.130964 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" podUID="527edb93-1d3a-45f7-a7c9-f9e28fb6f713" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.131350 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" 
event={"ID":"84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6","Type":"ContainerStarted","Data":"8ab2bfe997556781d511317ef3f855d329a1b5c1f3b6c7164c6129c5870230ff"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.133066 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" event={"ID":"67cd41a3-e91f-4d51-b79a-61d697bbf646","Type":"ContainerStarted","Data":"e18dc0069f153c8d66784f7c817fe581a8452467910123083a46b67fef56dbdf"} Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.134233 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" podUID="67cd41a3-e91f-4d51-b79a-61d697bbf646" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.136133 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" event={"ID":"e85f51ac-f1e1-4299-91a6-9b27dcc50967","Type":"ContainerStarted","Data":"54b23b25710de9bef8e5102ed9ee2c52b2dcfdeabcb64dcb89e1d1734e62f220"} Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.503881 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:20 crc kubenswrapper[5136]: I0320 07:09:20.503935 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.505181 5136 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.505228 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:22.505214276 +0000 UTC m=+1194.764525427 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.505543 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:20 crc kubenswrapper[5136]: E0320 07:09:20.505570 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:22.505563535 +0000 UTC m=+1194.764874686 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.149180 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" podUID="3dcb58f9-ad42-41ad-af27-2ca462257e77" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.149228 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:c8743a6661d118b0e5ba3eb110643358a8a3237dc75984a8f9829880b55a1622\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" podUID="527edb93-1d3a-45f7-a7c9-f9e28fb6f713" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.149499 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:43bd420bc05b4789243740bc75f61e10c7aac7883fc2f82b2d4d50085bc96c42\\\"\"" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" podUID="547cee69-3d64-49aa-8e95-c19be2bb3089" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.149552 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:bef93f71d3b42a72d8b96c69bdb4db4b8bd797c5093a0a719443d7a5c9aaab55\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" podUID="67cd41a3-e91f-4d51-b79a-61d697bbf646" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.149594 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:526f9d4965431e1a5e4f8c3224bcee3f636a3108a5e0767296a994c2a517404a\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" podUID="2f2fc86c-b42c-4fd9-94e6-817ed073035d" Mar 20 07:09:21 crc kubenswrapper[5136]: I0320 07:09:21.826533 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.826732 5136 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.826801 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. No retries permitted until 2026-03-20 07:09:25.826781955 +0000 UTC m=+1198.086093106 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:21 crc kubenswrapper[5136]: I0320 07:09:21.927474 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.927655 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:21 crc kubenswrapper[5136]: E0320 07:09:21.927753 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:25.927738499 +0000 UTC m=+1198.187049730 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:22 crc kubenswrapper[5136]: I0320 07:09:22.535375 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:22 crc kubenswrapper[5136]: I0320 07:09:22.536297 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:22 crc kubenswrapper[5136]: E0320 07:09:22.536036 5136 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:22 crc kubenswrapper[5136]: E0320 07:09:22.536419 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:26.536399498 +0000 UTC m=+1198.795710649 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:22 crc kubenswrapper[5136]: E0320 07:09:22.536442 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:22 crc kubenswrapper[5136]: E0320 07:09:22.536494 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:26.53647727 +0000 UTC m=+1198.795788421 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found Mar 20 07:09:25 crc kubenswrapper[5136]: I0320 07:09:25.879940 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:25 crc kubenswrapper[5136]: E0320 07:09:25.880139 5136 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:25 crc kubenswrapper[5136]: E0320 07:09:25.880424 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert 
podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. No retries permitted until 2026-03-20 07:09:33.880407446 +0000 UTC m=+1206.139718597 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:25 crc kubenswrapper[5136]: I0320 07:09:25.981849 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:25 crc kubenswrapper[5136]: E0320 07:09:25.982038 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:25 crc kubenswrapper[5136]: E0320 07:09:25.982126 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:33.982106622 +0000 UTC m=+1206.241417773 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:26 crc kubenswrapper[5136]: I0320 07:09:26.600559 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:26 crc kubenswrapper[5136]: I0320 07:09:26.600637 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:26 crc kubenswrapper[5136]: E0320 07:09:26.600757 5136 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:26 crc kubenswrapper[5136]: E0320 07:09:26.600791 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:26 crc kubenswrapper[5136]: E0320 07:09:26.600828 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:34.600795445 +0000 UTC m=+1206.860106596 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:26 crc kubenswrapper[5136]: E0320 07:09:26.600845 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:34.600835576 +0000 UTC m=+1206.860146727 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.243544 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" event={"ID":"95dfc6ea-897c-4133-ab1e-cefc81ab0623","Type":"ContainerStarted","Data":"7e7cb354942cada64e9d21bbf6a43976fbbd08570a40df72e9e9e4016460cc0c"} Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.244127 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.268264 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh" podStartSLOduration=2.285815806 podStartE2EDuration="16.268244697s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:18.839055189 +0000 UTC m=+1191.098366340" lastFinishedPulling="2026-03-20 07:09:32.82148408 +0000 UTC 
m=+1205.080795231" observedRunningTime="2026-03-20 07:09:33.263734839 +0000 UTC m=+1205.523045990" watchObservedRunningTime="2026-03-20 07:09:33.268244697 +0000 UTC m=+1205.527555848" Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.271395 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" event={"ID":"84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6","Type":"ContainerStarted","Data":"a6a388a64ae4dc140d91d31ee9ed9a06b37ae39f77a009e66dea771076c17470"} Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.271450 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.275203 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" event={"ID":"0454e048-0e5f-454d-a341-627512f745b9","Type":"ContainerStarted","Data":"063f1a6c6d535af49bc5857c8228402ce866d32f2fe6945410ffe65a1a440302"} Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.275611 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.290952 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x" podStartSLOduration=3.025117728 podStartE2EDuration="16.290933475s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.554928471 +0000 UTC m=+1191.814239622" lastFinishedPulling="2026-03-20 07:09:32.820744218 +0000 UTC m=+1205.080055369" observedRunningTime="2026-03-20 07:09:33.286336245 +0000 UTC m=+1205.545647416" watchObservedRunningTime="2026-03-20 07:09:33.290933475 +0000 UTC m=+1205.550244626" Mar 20 07:09:33 crc 
kubenswrapper[5136]: I0320 07:09:33.308535 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m" podStartSLOduration=2.719308828 podStartE2EDuration="16.308518708s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.231648471 +0000 UTC m=+1191.490959622" lastFinishedPulling="2026-03-20 07:09:32.820858351 +0000 UTC m=+1205.080169502" observedRunningTime="2026-03-20 07:09:33.305224138 +0000 UTC m=+1205.564535299" watchObservedRunningTime="2026-03-20 07:09:33.308518708 +0000 UTC m=+1205.567829859" Mar 20 07:09:33 crc kubenswrapper[5136]: I0320 07:09:33.915317 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" Mar 20 07:09:33 crc kubenswrapper[5136]: E0320 07:09:33.915547 5136 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:33 crc kubenswrapper[5136]: E0320 07:09:33.915636 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert podName:fad403b0-ff16-4bfe-a0e3-8f0da431260b nodeName:}" failed. No retries permitted until 2026-03-20 07:09:49.915615029 +0000 UTC m=+1222.174926230 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert") pod "infra-operator-controller-manager-7b9c774f96-rpqlj" (UID: "fad403b0-ff16-4bfe-a0e3-8f0da431260b") : secret "infra-operator-webhook-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.020862 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.021044 5136 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.021123 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert podName:10cd2a26-beca-4a3b-a791-83cc8cc451ab nodeName:}" failed. No retries permitted until 2026-03-20 07:09:50.021101161 +0000 UTC m=+1222.280412312 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert") pod "openstack-baremetal-operator-controller-manager-74c4796899556xf" (UID: "10cd2a26-beca-4a3b-a791-83cc8cc451ab") : secret "openstack-baremetal-operator-webhook-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.292382 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" event={"ID":"86ae10c6-6dff-4cac-a399-e03bd4de7134","Type":"ContainerStarted","Data":"842c003c7f7dc315dd86d99ba05f17e78b2dde6ec9264b74ab4d30b3f5643304"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.292753 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.298404 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" event={"ID":"98ee6d09-7d19-49ff-af63-3f24c4bbf6de","Type":"ContainerStarted","Data":"0775020ccbb5ad709406226ca37fc3cbe4d6f1d213a64c9c483d9e66951e4ce1"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.298493 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.299919 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" event={"ID":"8129ebe9-8537-403e-9c32-835f54b5d878","Type":"ContainerStarted","Data":"3375eef257c6e9c649cc18879ad127be29f46187af3180adb95fb0f7e23faf5f"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.300249 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" Mar 20 
07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.303523 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" event={"ID":"ce8f650c-1729-4d5d-ae70-6cefed6ebe33","Type":"ContainerStarted","Data":"c9227d9cc0f326a24aa8a02f9e68c0ceccbe64bf2944b0107c9f7ef44f8f0e24"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.303671 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.311612 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" event={"ID":"d9bea0a5-4e0c-4eec-8c57-465238459ec5","Type":"ContainerStarted","Data":"2436205d12b77089f4817d2475f2a18f2f74134b7ef87932421e011b548c9233"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.311737 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.316408 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq" podStartSLOduration=3.845727647 podStartE2EDuration="17.3163912s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.350688533 +0000 UTC m=+1191.609999684" lastFinishedPulling="2026-03-20 07:09:32.821352086 +0000 UTC m=+1205.080663237" observedRunningTime="2026-03-20 07:09:34.311794821 +0000 UTC m=+1206.571105972" watchObservedRunningTime="2026-03-20 07:09:34.3163912 +0000 UTC m=+1206.575702351" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.324964 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" 
event={"ID":"8035ac49-bf5e-4c7a-801a-2e0a9acdbec8","Type":"ContainerStarted","Data":"cd04d3f98382b33d52bbf480d7d633e36646d8e359cb3c02ddd1a559019b8567"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.325076 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.334039 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" event={"ID":"0688d3df-a125-4d57-9699-a87d92b140fa","Type":"ContainerStarted","Data":"53b4d7f8b56ae02327813017827bb357ec04040f06a42e0ec88eb67a6a44acf7"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.334183 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.337635 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57" podStartSLOduration=3.53706048 podStartE2EDuration="17.337618564s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.060972112 +0000 UTC m=+1191.320283263" lastFinishedPulling="2026-03-20 07:09:32.861530196 +0000 UTC m=+1205.120841347" observedRunningTime="2026-03-20 07:09:34.332225841 +0000 UTC m=+1206.591536992" watchObservedRunningTime="2026-03-20 07:09:34.337618564 +0000 UTC m=+1206.596929715" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.338009 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" event={"ID":"86f2c200-3fc8-4ff8-abbd-4e9196951c84","Type":"ContainerStarted","Data":"c814c7282ac1caf858e6667c6be6d5178233fb8bd64b230ef67ffabfe77c1149"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.338150 5136 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.342707 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" event={"ID":"9b7da04b-f73c-4838-978d-34e4665f3963","Type":"ContainerStarted","Data":"a8fc3251644dfe3907082bf6f4dfa2b2b6a45c56ce03d184ebadb69d09515a34"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.342766 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.351042 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" event={"ID":"e85f51ac-f1e1-4299-91a6-9b27dcc50967","Type":"ContainerStarted","Data":"621eb64767f303013aaf2fcb2f1b552cdc1de0ef6b369126ad639ee163c6da8c"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.351172 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.356239 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" event={"ID":"489b4c0d-9288-4e00-84ac-23fb05767840","Type":"ContainerStarted","Data":"59ffce7312913677bb45c8b599d43c49d6caf11830634af95385c1f867b164ec"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.356368 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.362881 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" event={"ID":"f50bceb5-4fe7-4eba-a9a2-e40f6c89583a","Type":"ContainerStarted","Data":"b548c05a98e376989922a707ba3ff580ddacc5bb99dd7352a13cdd8b7a9fb831"} Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.362912 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.366574 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc" podStartSLOduration=3.116876955 podStartE2EDuration="16.366565022s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.571739851 +0000 UTC m=+1191.831051002" lastFinishedPulling="2026-03-20 07:09:32.821427918 +0000 UTC m=+1205.080739069" observedRunningTime="2026-03-20 07:09:34.363708135 +0000 UTC m=+1206.623019286" watchObservedRunningTime="2026-03-20 07:09:34.366565022 +0000 UTC m=+1206.625876163" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.388506 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5" podStartSLOduration=3.757408387 podStartE2EDuration="17.388490937s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.232073144 +0000 UTC m=+1191.491384295" lastFinishedPulling="2026-03-20 07:09:32.863155694 +0000 UTC m=+1205.122466845" observedRunningTime="2026-03-20 07:09:34.383338541 +0000 UTC m=+1206.642649692" watchObservedRunningTime="2026-03-20 07:09:34.388490937 +0000 UTC m=+1206.647802088" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.406530 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw" 
podStartSLOduration=3.8140552850000002 podStartE2EDuration="17.406515374s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.228862306 +0000 UTC m=+1191.488173457" lastFinishedPulling="2026-03-20 07:09:32.821322395 +0000 UTC m=+1205.080633546" observedRunningTime="2026-03-20 07:09:34.405800173 +0000 UTC m=+1206.665111324" watchObservedRunningTime="2026-03-20 07:09:34.406515374 +0000 UTC m=+1206.665826525" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.432319 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb" podStartSLOduration=3.133036005 podStartE2EDuration="16.432304477s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.561005085 +0000 UTC m=+1191.820316236" lastFinishedPulling="2026-03-20 07:09:32.860273557 +0000 UTC m=+1205.119584708" observedRunningTime="2026-03-20 07:09:34.430431361 +0000 UTC m=+1206.689742512" watchObservedRunningTime="2026-03-20 07:09:34.432304477 +0000 UTC m=+1206.691615628" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.459171 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s" podStartSLOduration=3.506368069 podStartE2EDuration="17.459150291s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:18.868652437 +0000 UTC m=+1191.127963588" lastFinishedPulling="2026-03-20 07:09:32.821434659 +0000 UTC m=+1205.080745810" observedRunningTime="2026-03-20 07:09:34.452561832 +0000 UTC m=+1206.711872983" watchObservedRunningTime="2026-03-20 07:09:34.459150291 +0000 UTC m=+1206.718461442" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.517527 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk" 
podStartSLOduration=4.003489263 podStartE2EDuration="17.517506452s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.348375723 +0000 UTC m=+1191.607686864" lastFinishedPulling="2026-03-20 07:09:32.862392902 +0000 UTC m=+1205.121704053" observedRunningTime="2026-03-20 07:09:34.477624293 +0000 UTC m=+1206.736935444" watchObservedRunningTime="2026-03-20 07:09:34.517506452 +0000 UTC m=+1206.776817603" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.529540 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz" podStartSLOduration=4.184925729 podStartE2EDuration="17.529522947s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.556671453 +0000 UTC m=+1191.815982604" lastFinishedPulling="2026-03-20 07:09:32.901268671 +0000 UTC m=+1205.160579822" observedRunningTime="2026-03-20 07:09:34.524472344 +0000 UTC m=+1206.783783505" watchObservedRunningTime="2026-03-20 07:09:34.529522947 +0000 UTC m=+1206.788834098" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.589177 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr" podStartSLOduration=3.401532662 podStartE2EDuration="16.589159557s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.671549029 +0000 UTC m=+1191.930860180" lastFinishedPulling="2026-03-20 07:09:32.859175924 +0000 UTC m=+1205.118487075" observedRunningTime="2026-03-20 07:09:34.588702353 +0000 UTC m=+1206.848013504" watchObservedRunningTime="2026-03-20 07:09:34.589159557 +0000 UTC m=+1206.848470708" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.589439 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw" 
podStartSLOduration=4.2809341530000005 podStartE2EDuration="17.589435945s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.554682923 +0000 UTC m=+1191.813994074" lastFinishedPulling="2026-03-20 07:09:32.863184715 +0000 UTC m=+1205.122495866" observedRunningTime="2026-03-20 07:09:34.54412894 +0000 UTC m=+1206.803440091" watchObservedRunningTime="2026-03-20 07:09:34.589435945 +0000 UTC m=+1206.848747096" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.630914 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw" podStartSLOduration=3.648632509 podStartE2EDuration="16.630894053s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.839149264 +0000 UTC m=+1192.098460415" lastFinishedPulling="2026-03-20 07:09:32.821410808 +0000 UTC m=+1205.080721959" observedRunningTime="2026-03-20 07:09:34.630418738 +0000 UTC m=+1206.889729889" watchObservedRunningTime="2026-03-20 07:09:34.630894053 +0000 UTC m=+1206.890205204" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.631090 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:34 crc kubenswrapper[5136]: I0320 07:09:34.631156 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " 
pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.631289 5136 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.631374 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:50.631355517 +0000 UTC m=+1222.890666668 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "webhook-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.631310 5136 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Mar 20 07:09:34 crc kubenswrapper[5136]: E0320 07:09:34.631434 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs podName:9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8 nodeName:}" failed. No retries permitted until 2026-03-20 07:09:50.631416929 +0000 UTC m=+1222.890728080 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs") pod "openstack-operator-controller-manager-86bd8996f6-5rlp5" (UID: "9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8") : secret "metrics-server-cert" not found
Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.392330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" event={"ID":"67cd41a3-e91f-4d51-b79a-61d697bbf646","Type":"ContainerStarted","Data":"da5cf7017c8f5d726fc27efce7259295460743dcd772f95d8370fb0073663974"}
Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.395045 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp"
Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.395351 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" event={"ID":"527edb93-1d3a-45f7-a7c9-f9e28fb6f713","Type":"ContainerStarted","Data":"e6f489cd961c6ab5369412ab0ada6d459daf4ad2149b00b608db522b4f2f6027"}
Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.395595 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7"
Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.410609 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp" podStartSLOduration=2.35344406 podStartE2EDuration="19.410593768s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.6860866 +0000 UTC m=+1191.945397751" lastFinishedPulling="2026-03-20 07:09:36.743236308 +0000 UTC m=+1209.002547459" observedRunningTime="2026-03-20 07:09:37.40768123 +0000 UTC m=+1209.666992381" watchObservedRunningTime="2026-03-20 07:09:37.410593768 +0000 UTC m=+1209.669904919"
Mar 20 07:09:37 crc kubenswrapper[5136]: I0320 07:09:37.423175 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7" podStartSLOduration=2.358590547 podStartE2EDuration="19.4231544s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.685293506 +0000 UTC m=+1191.944604657" lastFinishedPulling="2026-03-20 07:09:36.749857359 +0000 UTC m=+1209.009168510" observedRunningTime="2026-03-20 07:09:37.418486648 +0000 UTC m=+1209.677797789" watchObservedRunningTime="2026-03-20 07:09:37.4231544 +0000 UTC m=+1209.682465551"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.114171 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-59bc569d95-5lz5s"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.138706 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d58dc466-g62fh"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.151377 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-588d4d986b-nzs5m"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.206026 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-79df6bcc97-4zc57"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.249207 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-67dd5f86f5-j7rd5"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.294571 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-8464cc45fb-jqkmw"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.323712 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6f787dddc9-cvwqk"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.369190 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-768b96df4c-9vwxq"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.469632 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-55f864c847-wz6kw"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.519858 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-5d488d59fb-rdkrz"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.551340 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5b9f45d989-sshvb"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.553761 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67ccfc9778-w497x"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.623722 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-c674c5965-jmsnc"
Mar 20 07:09:38 crc kubenswrapper[5136]: I0320 07:09:38.646711 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-d6b694c5-qwtfr"
Mar 20 07:09:39 crc kubenswrapper[5136]: I0320 07:09:39.071313 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6c4d75f7f9-xp6jw"
Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.439160 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" event={"ID":"3dcb58f9-ad42-41ad-af27-2ca462257e77","Type":"ContainerStarted","Data":"0d0fe534f35939034613c97846211d407cd8c5249c5e40e9993476f79ecceb60"}
Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.442038 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" event={"ID":"547cee69-3d64-49aa-8e95-c19be2bb3089","Type":"ContainerStarted","Data":"8cbca177b4dcb874725fd915a9e4e4ec29b7155efc98a2fc4feb5ba129fdf1ca"}
Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.442226 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm"
Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.443693 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" event={"ID":"2f2fc86c-b42c-4fd9-94e6-817ed073035d","Type":"ContainerStarted","Data":"d472ff1e308210958e334cd25c2c28645dee8cabbd1f241342f9646b495b942f"}
Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.443850 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592"
Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.453590 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-vlngd" podStartSLOduration=2.275244848 podStartE2EDuration="22.453569532s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.799684798 +0000 UTC m=+1192.058995949" lastFinishedPulling="2026-03-20 07:09:39.978009482 +0000 UTC m=+1212.237320633" observedRunningTime="2026-03-20 07:09:40.451782788 +0000 UTC m=+1212.711093949" watchObservedRunningTime="2026-03-20 07:09:40.453569532 +0000 UTC m=+1212.712880683"
Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.467137 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm" podStartSLOduration=2.304309569 podStartE2EDuration="22.467110602s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.799398999 +0000 UTC m=+1192.058710150" lastFinishedPulling="2026-03-20 07:09:39.962200032 +0000 UTC m=+1212.221511183" observedRunningTime="2026-03-20 07:09:40.466001919 +0000 UTC m=+1212.725313110" watchObservedRunningTime="2026-03-20 07:09:40.467110602 +0000 UTC m=+1212.726421753"
Mar 20 07:09:40 crc kubenswrapper[5136]: I0320 07:09:40.494299 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592" podStartSLOduration=3.218241988 podStartE2EDuration="23.494270597s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:19.686169323 +0000 UTC m=+1191.945480484" lastFinishedPulling="2026-03-20 07:09:39.962197942 +0000 UTC m=+1212.221509093" observedRunningTime="2026-03-20 07:09:40.482283573 +0000 UTC m=+1212.741594744" watchObservedRunningTime="2026-03-20 07:09:40.494270597 +0000 UTC m=+1212.753581748"
Mar 20 07:09:48 crc kubenswrapper[5136]: I0320 07:09:48.603345 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-884679f54-pdmtp"
Mar 20 07:09:48 crc kubenswrapper[5136]: I0320 07:09:48.868358 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5784578c99-58pk7"
Mar 20 07:09:48 crc kubenswrapper[5136]: I0320 07:09:48.869690 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-767865f676-8g592"
Mar 20 07:09:48 crc kubenswrapper[5136]: I0320 07:09:48.986365 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5c5cb9c4d7-v4npm"
Mar 20 07:09:49 crc kubenswrapper[5136]: I0320 07:09:49.979191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"
Mar 20 07:09:49 crc kubenswrapper[5136]: I0320 07:09:49.986119 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/fad403b0-ff16-4bfe-a0e3-8f0da431260b-cert\") pod \"infra-operator-controller-manager-7b9c774f96-rpqlj\" (UID: \"fad403b0-ff16-4bfe-a0e3-8f0da431260b\") " pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.081763 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.086278 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/10cd2a26-beca-4a3b-a791-83cc8cc451ab-cert\") pod \"openstack-baremetal-operator-controller-manager-74c4796899556xf\" (UID: \"10cd2a26-beca-4a3b-a791-83cc8cc451ab\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.139130 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7crl6"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.148495 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.336989 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"]
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.364499 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-d2gj7"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.373439 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.527246 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" event={"ID":"fad403b0-ff16-4bfe-a0e3-8f0da431260b","Type":"ContainerStarted","Data":"c45f57dfd4dbf4cc3c6c85e29d0ca8e3f12c8b75411bfc314fb8af947c6e4f1d"}
Mar 20 07:09:50 crc kubenswrapper[5136]: W0320 07:09:50.659936 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10cd2a26_beca_4a3b_a791_83cc8cc451ab.slice/crio-2632c2dcdcd518ca8df75fe73147b96d28212e15f6327025e7ba8eb0aaf38cc0 WatchSource:0}: Error finding container 2632c2dcdcd518ca8df75fe73147b96d28212e15f6327025e7ba8eb0aaf38cc0: Status 404 returned error can't find the container with id 2632c2dcdcd518ca8df75fe73147b96d28212e15f6327025e7ba8eb0aaf38cc0
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.661845 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf"]
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.688960 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.689094 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.693234 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-metrics-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.693368 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8-webhook-certs\") pod \"openstack-operator-controller-manager-86bd8996f6-5rlp5\" (UID: \"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8\") " pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.892330 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-cldn9"
Mar 20 07:09:50 crc kubenswrapper[5136]: I0320 07:09:50.901045 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"
Mar 20 07:09:51 crc kubenswrapper[5136]: I0320 07:09:51.381416 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"]
Mar 20 07:09:51 crc kubenswrapper[5136]: I0320 07:09:51.535257 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" event={"ID":"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8","Type":"ContainerStarted","Data":"456a80b026269f2ff0688c96d61d97f3845ab6551718078d63b92ce2085414c2"}
Mar 20 07:09:51 crc kubenswrapper[5136]: I0320 07:09:51.536458 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" event={"ID":"10cd2a26-beca-4a3b-a791-83cc8cc451ab","Type":"ContainerStarted","Data":"2632c2dcdcd518ca8df75fe73147b96d28212e15f6327025e7ba8eb0aaf38cc0"}
Mar 20 07:09:56 crc kubenswrapper[5136]: I0320 07:09:56.580487 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" event={"ID":"9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8","Type":"ContainerStarted","Data":"a34ff85e4fc0d37e3fa38dc38f2b4e08264225bc05b72ba2ed359276601fc32c"}
Mar 20 07:09:56 crc kubenswrapper[5136]: I0320 07:09:56.581200 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"
Mar 20 07:09:56 crc kubenswrapper[5136]: I0320 07:09:56.615519 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5" podStartSLOduration=38.615500197 podStartE2EDuration="38.615500197s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:09:56.606244875 +0000 UTC m=+1228.865556026" watchObservedRunningTime="2026-03-20 07:09:56.615500197 +0000 UTC m=+1228.874811338"
Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.595118 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" event={"ID":"fad403b0-ff16-4bfe-a0e3-8f0da431260b","Type":"ContainerStarted","Data":"d2777db4006c3f3e7ce92a899ea2151eedfb3124b820af30e500519dbf78c310"}
Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.595975 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"
Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.596729 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" event={"ID":"10cd2a26-beca-4a3b-a791-83cc8cc451ab","Type":"ContainerStarted","Data":"009dd88a073a3ca0707eb2178f3e0cf30b01ff906278d4101646069783bf33eb"}
Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.597186 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf"
Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.628130 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj" podStartSLOduration=33.903403674 podStartE2EDuration="41.628111436s" podCreationTimestamp="2026-03-20 07:09:17 +0000 UTC" firstStartedPulling="2026-03-20 07:09:50.347988271 +0000 UTC m=+1222.607299412" lastFinishedPulling="2026-03-20 07:09:58.072696013 +0000 UTC m=+1230.332007174" observedRunningTime="2026-03-20 07:09:58.6249683 +0000 UTC m=+1230.884279451" watchObservedRunningTime="2026-03-20 07:09:58.628111436 +0000 UTC m=+1230.887422587"
Mar 20 07:09:58 crc kubenswrapper[5136]: I0320 07:09:58.660692 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf" podStartSLOduration=33.254252351 podStartE2EDuration="40.660673304s" podCreationTimestamp="2026-03-20 07:09:18 +0000 UTC" firstStartedPulling="2026-03-20 07:09:50.662210626 +0000 UTC m=+1222.921521777" lastFinishedPulling="2026-03-20 07:09:58.068631579 +0000 UTC m=+1230.327942730" observedRunningTime="2026-03-20 07:09:58.6585868 +0000 UTC m=+1230.917897951" watchObservedRunningTime="2026-03-20 07:09:58.660673304 +0000 UTC m=+1230.919984455"
Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.141210 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566510-bn9cf"]
Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.142510 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566510-bn9cf"
Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.145152 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.147105 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.147442 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.192188 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566510-bn9cf"]
Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.253505 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd4pc\" (UniqueName: \"kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc\") pod \"auto-csr-approver-29566510-bn9cf\" (UID: \"17242c2e-8526-49cf-89dd-e35bd97c6626\") " pod="openshift-infra/auto-csr-approver-29566510-bn9cf"
Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.355240 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd4pc\" (UniqueName: \"kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc\") pod \"auto-csr-approver-29566510-bn9cf\" (UID: \"17242c2e-8526-49cf-89dd-e35bd97c6626\") " pod="openshift-infra/auto-csr-approver-29566510-bn9cf"
Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.372363 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd4pc\" (UniqueName: \"kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc\") pod \"auto-csr-approver-29566510-bn9cf\" (UID: \"17242c2e-8526-49cf-89dd-e35bd97c6626\") " pod="openshift-infra/auto-csr-approver-29566510-bn9cf"
Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.512987 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566510-bn9cf"
Mar 20 07:10:00 crc kubenswrapper[5136]: I0320 07:10:00.967052 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566510-bn9cf"]
Mar 20 07:10:00 crc kubenswrapper[5136]: W0320 07:10:00.972116 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17242c2e_8526_49cf_89dd_e35bd97c6626.slice/crio-d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3 WatchSource:0}: Error finding container d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3: Status 404 returned error can't find the container with id d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3
Mar 20 07:10:01 crc kubenswrapper[5136]: I0320 07:10:01.638405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566510-bn9cf" event={"ID":"17242c2e-8526-49cf-89dd-e35bd97c6626","Type":"ContainerStarted","Data":"d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3"}
Mar 20 07:10:03 crc kubenswrapper[5136]: I0320 07:10:03.653963 5136 generic.go:334] "Generic (PLEG): container finished" podID="17242c2e-8526-49cf-89dd-e35bd97c6626" containerID="a922963e448f67de5c7ef7e39ae9a8fe1051c4a0abe704c7b54dc25c09d90caa" exitCode=0
Mar 20 07:10:03 crc kubenswrapper[5136]: I0320 07:10:03.654033 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566510-bn9cf" event={"ID":"17242c2e-8526-49cf-89dd-e35bd97c6626","Type":"ContainerDied","Data":"a922963e448f67de5c7ef7e39ae9a8fe1051c4a0abe704c7b54dc25c09d90caa"}
Mar 20 07:10:04 crc kubenswrapper[5136]: I0320 07:10:04.976130 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566510-bn9cf"
Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.121953 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd4pc\" (UniqueName: \"kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc\") pod \"17242c2e-8526-49cf-89dd-e35bd97c6626\" (UID: \"17242c2e-8526-49cf-89dd-e35bd97c6626\") "
Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.127525 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc" (OuterVolumeSpecName: "kube-api-access-fd4pc") pod "17242c2e-8526-49cf-89dd-e35bd97c6626" (UID: "17242c2e-8526-49cf-89dd-e35bd97c6626"). InnerVolumeSpecName "kube-api-access-fd4pc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.223262 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd4pc\" (UniqueName: \"kubernetes.io/projected/17242c2e-8526-49cf-89dd-e35bd97c6626-kube-api-access-fd4pc\") on node \"crc\" DevicePath \"\""
Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.671714 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566510-bn9cf" event={"ID":"17242c2e-8526-49cf-89dd-e35bd97c6626","Type":"ContainerDied","Data":"d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3"}
Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.671773 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ac28245784c95b48e9a6b1a0f16405ab5a59fb79434ce0f14bf6dd255fc5b3"
Mar 20 07:10:05 crc kubenswrapper[5136]: I0320 07:10:05.671771 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566510-bn9cf"
Mar 20 07:10:06 crc kubenswrapper[5136]: I0320 07:10:06.071214 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566504-fnsrq"]
Mar 20 07:10:06 crc kubenswrapper[5136]: I0320 07:10:06.081610 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566504-fnsrq"]
Mar 20 07:10:06 crc kubenswrapper[5136]: I0320 07:10:06.410930 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e1a6ad-3e5f-4a83-b429-d132710b8146" path="/var/lib/kubelet/pods/f8e1a6ad-3e5f-4a83-b429-d132710b8146/volumes"
Mar 20 07:10:10 crc kubenswrapper[5136]: I0320 07:10:10.157039 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-7b9c774f96-rpqlj"
Mar 20 07:10:10 crc kubenswrapper[5136]: I0320 07:10:10.380522 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-74c4796899556xf"
Mar 20 07:10:10 crc kubenswrapper[5136]: I0320 07:10:10.906430 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-86bd8996f6-5rlp5"
Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.927770 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"]
Mar 20 07:10:25 crc kubenswrapper[5136]: E0320 07:10:25.928447 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17242c2e-8526-49cf-89dd-e35bd97c6626" containerName="oc"
Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.928458 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="17242c2e-8526-49cf-89dd-e35bd97c6626" containerName="oc"
Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.928636 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="17242c2e-8526-49cf-89dd-e35bd97c6626" containerName="oc"
Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.929376 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5"
Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.931276 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt"
Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.931545 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns"
Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.931694 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fj6zh"
Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.931845 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt"
Mar 20 07:10:25 crc kubenswrapper[5136]: I0320 07:10:25.949561 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"]
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.038943 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"]
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.039957 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64696987c5-86bqj"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.042704 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.047338 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"]
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.091376 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.091471 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67tl9\" (UniqueName: \"kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.192779 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.192943 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rkmb\" (UniqueName: \"kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.193198 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67tl9\" (UniqueName: \"kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.193305 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.193379 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.193773 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.219361 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67tl9\" (UniqueName: \"kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9\") pod \"dnsmasq-dns-5448ff6dc7-qtnm5\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.252692 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.294792 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.294851 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.294895 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rkmb\" (UniqueName: \"kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.296073 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.296792 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.312484 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rkmb\" (UniqueName: \"kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb\") pod \"dnsmasq-dns-64696987c5-86bqj\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " pod="openstack/dnsmasq-dns-64696987c5-86bqj"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.357775 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64696987c5-86bqj"
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.735220 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"]
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.745016 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.797682 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"]
Mar 20 07:10:26 crc kubenswrapper[5136]: W0320 07:10:26.805299 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1dd8ad22_4946_4d2c_b2cb_a38f42166c88.slice/crio-0124fdc9a4d5b014a0d0715276e212228f8d98c3844a190558ceec06845de8bc WatchSource:0}: Error finding container 0124fdc9a4d5b014a0d0715276e212228f8d98c3844a190558ceec06845de8bc: Status 404 returned error can't find the container with id 0124fdc9a4d5b014a0d0715276e212228f8d98c3844a190558ceec06845de8bc
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.857220 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64696987c5-86bqj" event={"ID":"1dd8ad22-4946-4d2c-b2cb-a38f42166c88","Type":"ContainerStarted","Data":"0124fdc9a4d5b014a0d0715276e212228f8d98c3844a190558ceec06845de8bc"}
Mar 20 07:10:26 crc kubenswrapper[5136]: I0320 07:10:26.858444 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" event={"ID":"89d4f1a0-0e10-49e6-98bc-43920e03caba","Type":"ContainerStarted","Data":"b1739b4168d54cbd082e621d7a6b119af2a011d7e88735613ee365c8955655aa"}
Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.227738 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"]
Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.241149 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"]
Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.242147 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb"
Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.248988 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"]
Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.421970 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb"
Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.422324 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb"
Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.422369 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln7fj\" (UniqueName:
\"kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.523427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.523471 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.523501 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln7fj\" (UniqueName: \"kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.524582 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.525149 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config\") pod 
\"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.542022 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ln7fj\" (UniqueName: \"kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj\") pod \"dnsmasq-dns-854f47b4f9-rcppb\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.565704 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.845073 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"] Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.861370 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"] Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.862462 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:28 crc kubenswrapper[5136]: I0320 07:10:28.886754 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"] Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.033456 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.033526 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw97g\" (UniqueName: \"kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.033576 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.102176 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"] Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.136649 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc 
kubenswrapper[5136]: I0320 07:10:29.136711 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sw97g\" (UniqueName: \"kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.136779 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.137665 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.138207 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.158171 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sw97g\" (UniqueName: \"kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g\") pod \"dnsmasq-dns-54b5dffb47-zd72x\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.189939 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.334627 5136 scope.go:117] "RemoveContainer" containerID="a9c6142c6c3be406a353a6109a8cb8b7b38a7799c67785c8207003ce9a223a42" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.406805 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.412123 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.414193 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.414377 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.414503 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-x9v8f" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.414526 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.416516 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.416736 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.416761 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.439572 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.460050 5136 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"] Mar 20 07:10:29 crc kubenswrapper[5136]: W0320 07:10:29.472589 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod571c2781_59c0_4345_9a04_09a51ceabc0d.slice/crio-603105be24fa6a6cb4daf90aa8e7faaf32594389e035752cfb2e9cf8a54926bd WatchSource:0}: Error finding container 603105be24fa6a6cb4daf90aa8e7faaf32594389e035752cfb2e9cf8a54926bd: Status 404 returned error can't find the container with id 603105be24fa6a6cb4daf90aa8e7faaf32594389e035752cfb2e9cf8a54926bd Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542593 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542642 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542687 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542710 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542730 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542766 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542844 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542875 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p49dt\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542908 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542945 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.542999 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644476 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644550 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644583 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p49dt\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt\") pod 
\"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644606 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644651 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644683 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644725 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644747 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644775 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644798 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.644839 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.645010 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.646012 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.646360 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.646667 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.646690 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.647943 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.655705 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.667325 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " 
pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.667862 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.668509 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.671858 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.676233 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p49dt\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt\") pod \"rabbitmq-server-0\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.736420 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.881164 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" event={"ID":"571c2781-59c0-4345-9a04-09a51ceabc0d","Type":"ContainerStarted","Data":"603105be24fa6a6cb4daf90aa8e7faaf32594389e035752cfb2e9cf8a54926bd"} Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.883077 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" event={"ID":"90e44514-0ddc-4151-ad00-cf458d5adf9e","Type":"ContainerStarted","Data":"a835263469ad6ded88770537144f67af862b9ca0e4d044183f6bbb2b8ff9cb68"} Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.991794 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.993149 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.995371 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.995415 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.995452 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.995571 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-88lhx" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.995718 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.999697 5136 reflector.go:368] Caches 
populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Mar 20 07:10:29 crc kubenswrapper[5136]: I0320 07:10:29.999871 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.018378 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165411 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165482 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165594 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skh55\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165697 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.165722 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.166051 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.166145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.166175 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.166207 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.166237 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.204150 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 07:10:30 crc kubenswrapper[5136]: W0320 07:10:30.217538 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod261514f8_7734_423d_b15a_e83fdc2a85fd.slice/crio-3dd70fd22c8a29190bae59972f90dd4530137cb93e7e5a8ebd5a576dd4e2a33b WatchSource:0}: Error finding container 3dd70fd22c8a29190bae59972f90dd4530137cb93e7e5a8ebd5a576dd4e2a33b: Status 404 returned error can't find the container with id 3dd70fd22c8a29190bae59972f90dd4530137cb93e7e5a8ebd5a576dd4e2a33b
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268229 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268428 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268520 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268540 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268572 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268591 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skh55\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268639 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268662 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268722 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268743 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.268762 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.269148 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.269480 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.269667 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.270132 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.270718 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.271849 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.276543 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.278897 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.289786 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.292972 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.293681 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skh55\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.304708 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.317388 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 07:10:30 crc kubenswrapper[5136]: I0320 07:10:30.893360 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerStarted","Data":"3dd70fd22c8a29190bae59972f90dd4530137cb93e7e5a8ebd5a576dd4e2a33b"}
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.494613 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.495789 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.497726 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.497875 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-7hd6r"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.500442 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.500725 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.506034 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.506246 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.593943 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594005 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594104 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594144 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594171 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594236 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.594293 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmgbx\" (UniqueName: \"kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.695638 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.695728 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.696008 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.696710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698089 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698267 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698304 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698348 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmgbx\" (UniqueName: \"kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698421 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698553 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.698691 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.700323 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.710551 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.715533 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.717665 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmgbx\" (UniqueName: \"kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.722351 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"openstack-galera-0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " pod="openstack/openstack-galera-0"
Mar 20 07:10:31 crc kubenswrapper[5136]: I0320 07:10:31.814632 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.962781 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.965216 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.970233 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.970749 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.971060 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-t6q78"
Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.971147 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Mar 20 07:10:32 crc kubenswrapper[5136]: I0320 07:10:32.979113 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.126892 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.126956 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.126982 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pv5f\" (UniqueName: \"kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.127152 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.127230 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.127282 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.127322 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.127371 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228429 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228502 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228536 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228568 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228597 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228633 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228656 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.228679 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pv5f\" (UniqueName: \"kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.229647 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.230597 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.239728 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.246977 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.249524 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.249553 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.251353 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pv5f\" (UniqueName: \"kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.255537 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.275315 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-cell1-galera-0\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.291227 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.346106 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.347134 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.356584 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-96ds2"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.356836 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.357346 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.367705 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.435539 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.435794 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmhrz\" (UniqueName: \"kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.436032 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0"
Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.436090 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.436125 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.537413 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.537478 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.537501 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.537561 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " 
pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.537611 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmhrz\" (UniqueName: \"kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.538978 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.539334 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.542379 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.543627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.555325 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmhrz\" (UniqueName: 
\"kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz\") pod \"memcached-0\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") " pod="openstack/memcached-0" Mar 20 07:10:33 crc kubenswrapper[5136]: I0320 07:10:33.684310 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.318491 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.319832 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.322412 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-r4xg6" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.390007 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.468965 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-478m9\" (UniqueName: \"kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9\") pod \"kube-state-metrics-0\" (UID: \"cf624d46-ce35-4e7f-b463-4b0eba006ded\") " pod="openstack/kube-state-metrics-0" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.570709 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-478m9\" (UniqueName: \"kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9\") pod \"kube-state-metrics-0\" (UID: \"cf624d46-ce35-4e7f-b463-4b0eba006ded\") " pod="openstack/kube-state-metrics-0" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.592653 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-478m9\" (UniqueName: 
\"kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9\") pod \"kube-state-metrics-0\" (UID: \"cf624d46-ce35-4e7f-b463-4b0eba006ded\") " pod="openstack/kube-state-metrics-0" Mar 20 07:10:35 crc kubenswrapper[5136]: I0320 07:10:35.640853 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 07:10:37 crc kubenswrapper[5136]: I0320 07:10:37.490078 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.346054 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.347735 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.351386 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.351551 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-txsj2" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.352079 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.352358 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.352711 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.358528 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.439415 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-gnwt6"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.440619 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.440783 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.440906 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.440945 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.440965 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.441015 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.441043 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.441067 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjbm7\" (UniqueName: \"kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.441113 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.443417 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.443591 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.443724 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-7cjkk" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.448215 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ovn-controller-ovs-ldp4w"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.458761 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.460534 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnwt6"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.466962 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ldp4w"] Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543477 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543542 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqmjf\" (UniqueName: \"kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543577 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543618 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run\") pod 
\"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543650 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjbm7\" (UniqueName: \"kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543700 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543735 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.543787 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544421 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " 
pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544464 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544491 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544516 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544563 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhfl7\" (UniqueName: \"kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544590 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 
07:10:39.544611 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544638 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544663 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544686 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.545126 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.544712 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.545341 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.545377 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.545392 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.552003 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.552204 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config\") pod \"ovsdbserver-nb-0\" (UID: 
\"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.565063 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjbm7\" (UniqueName: \"kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.565329 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.566461 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.570176 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.571609 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") " pod="openstack/ovsdbserver-nb-0" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646557 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646623 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646642 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646682 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646709 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646744 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqmjf\" (UniqueName: \"kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf\") pod \"ovn-controller-ovs-ldp4w\" (UID: 
\"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646767 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646786 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646834 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646867 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646890 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6" Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646917 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.646937 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhfl7\" (UniqueName: \"kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.647275 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.647329 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.647490 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.647618 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.647988 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.648073 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.650377 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.652640 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.652796 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.653074 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.653098 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.664019 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqmjf\" (UniqueName: \"kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf\") pod \"ovn-controller-ovs-ldp4w\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " pod="openstack/ovn-controller-ovs-ldp4w"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.664952 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhfl7\" (UniqueName: \"kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7\") pod \"ovn-controller-gnwt6\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " pod="openstack/ovn-controller-gnwt6"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.687829 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.759078 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6"
Mar 20 07:10:39 crc kubenswrapper[5136]: I0320 07:10:39.777198 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ldp4w"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.655347 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.659771 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.662810 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.662989 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.663141 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-7jqrk"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.664335 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.666191 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785416 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785481 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785617 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785758 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785793 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd6zc\" (UniqueName: \"kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785845 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785891 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.785961 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888034 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888271 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888354 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888382 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd6zc\" (UniqueName: \"kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888410 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888445 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888496 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.888777 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.889335 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.889497 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.890034 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.896715 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.896733 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.902441 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.902945 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd6zc\" (UniqueName: \"kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.913694 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"ovsdbserver-sb-0\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:41 crc kubenswrapper[5136]: I0320 07:10:41.980238 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 20 07:10:46 crc kubenswrapper[5136]: W0320 07:10:46.048415 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod960739f0_c4a5_49c6_8e2a_9452815cf1a9.slice/crio-c417f48a18610bbcb3a324c0dd0cc757f54ca7629176ea2441d7c501e41142ce WatchSource:0}: Error finding container c417f48a18610bbcb3a324c0dd0cc757f54ca7629176ea2441d7c501e41142ce: Status 404 returned error can't find the container with id c417f48a18610bbcb3a324c0dd0cc757f54ca7629176ea2441d7c501e41142ce
Mar 20 07:10:46 crc kubenswrapper[5136]: I0320 07:10:46.519499 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.027441 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerStarted","Data":"22b2668fe332b62f7864af2d759b5866cf033333320267d52cb7cec04a426bd9"}
Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.029840 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"960739f0-c4a5-49c6-8e2a-9452815cf1a9","Type":"ContainerStarted","Data":"c417f48a18610bbcb3a324c0dd0cc757f54ca7629176ea2441d7c501e41142ce"}
Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.411445 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Mar 20 07:10:47 crc kubenswrapper[5136]: W0320 07:10:47.418456 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod23c10323_3c49_4f00_8bf7_319e6f5834d0.slice/crio-472fbef87977bdfc11603315d743a17729300016a4f32222d159ed871e8ca38d WatchSource:0}: Error finding container 472fbef87977bdfc11603315d743a17729300016a4f32222d159ed871e8ca38d: Status 404 returned error can't find the container with id 472fbef87977bdfc11603315d743a17729300016a4f32222d159ed871e8ca38d
Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.654272 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 07:10:47 crc kubenswrapper[5136]: W0320 07:10:47.657077 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf624d46_ce35_4e7f_b463_4b0eba006ded.slice/crio-344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a WatchSource:0}: Error finding container 344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a: Status 404 returned error can't find the container with id 344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a
Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.727659 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnwt6"]
Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.736510 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 20 07:10:47 crc kubenswrapper[5136]: E0320 07:10:47.759303 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51"
Mar 20 07:10:47 crc kubenswrapper[5136]: E0320 07:10:47.759485 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sw97g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-54b5dffb47-zd72x_openstack(571c2781-59c0-4345-9a04-09a51ceabc0d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 20 07:10:47 crc kubenswrapper[5136]: E0320 07:10:47.760645 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" podUID="571c2781-59c0-4345-9a04-09a51ceabc0d"
Mar 20 07:10:47 crc kubenswrapper[5136]: I0320 07:10:47.959116 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 20 07:10:47 crc kubenswrapper[5136]: W0320 07:10:47.967618 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf872c575_a357_4b29_b5e8_cf5dbe6f3d7a.slice/crio-58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6 WatchSource:0}: Error finding container 58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6: Status 404 returned error can't find the container with id 58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6
Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.038132 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cf624d46-ce35-4e7f-b463-4b0eba006ded","Type":"ContainerStarted","Data":"344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a"}
Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.039395 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerStarted","Data":"8182f12d4de26ad384abd8e2a3a9007acaaad7cd8b7e832cca1481d0c6ef89ef"}
Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.040116 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6" event={"ID":"04ee32c0-35eb-488d-b166-0ad8a8d09f48","Type":"ContainerStarted","Data":"969e50d91cdce234e3ebd25af89de94a9345b9463c4d70197f2dbbaa911c914f"}
Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.041002 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerStarted","Data":"58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6"}
Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.042780 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerStarted","Data":"472fbef87977bdfc11603315d743a17729300016a4f32222d159ed871e8ca38d"}
Mar 20 07:10:48 crc kubenswrapper[5136]: E0320 07:10:48.043484 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51\\\"\"" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" podUID="571c2781-59c0-4345-9a04-09a51ceabc0d"
Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.435794 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 20 07:10:48 crc kubenswrapper[5136]: I0320 07:10:48.914551 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-ldp4w"]
Mar 20 07:10:49 crc kubenswrapper[5136]: I0320 07:10:49.054210 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerStarted","Data":"11e0a5791b54dfc64b5c868dfb4c7110fa55e59d3ea215d5dd89246b1feeb323"}
Mar 20 07:10:49 crc kubenswrapper[5136]: I0320 07:10:49.055371 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerStarted","Data":"ecef44b4bd97cd40f7c1c2de9472cdb09460ec1aa1b9eb32b1b7e366da3578d0"}
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.113515 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vr74x"]
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.116729 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.118806 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.131038 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vr74x"]
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249108 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnhlv\" (UniqueName: \"kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249163 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249205 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249228 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249264 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.249286 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.265869 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"]
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.305638 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"]
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.308418 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.311964 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.324678 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"]
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.369023 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.369091 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.369191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnhlv\" (UniqueName: \"kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.369231 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.369268 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.369303 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.370500 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.370573 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.370864 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.376561 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.388698 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnhlv\" (UniqueName: \"kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.405256 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vr74x\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.455582 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.470935 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.471277 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th8ss\" (UniqueName: \"kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.471376 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.471452 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.545500 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"]
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.560448 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"]
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.564048 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.567109 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.574779 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th8ss\" (UniqueName: \"kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.574896 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.574932 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.574971 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb"
Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.576027 5136
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.580361 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.580393 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.585034 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"] Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.605887 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th8ss\" (UniqueName: \"kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss\") pod \"dnsmasq-dns-84d7bcdf99-wljsb\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.663090 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.672274 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.678020 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl9j4\" (UniqueName: \"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.678234 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.678451 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.678535 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.678626 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: 
\"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.780623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc\") pod \"571c2781-59c0-4345-9a04-09a51ceabc0d\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.780943 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config\") pod \"571c2781-59c0-4345-9a04-09a51ceabc0d\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781023 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sw97g\" (UniqueName: \"kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g\") pod \"571c2781-59c0-4345-9a04-09a51ceabc0d\" (UID: \"571c2781-59c0-4345-9a04-09a51ceabc0d\") " Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781334 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781359 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781440 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781514 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl9j4\" (UniqueName: \"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781571 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.781789 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "571c2781-59c0-4345-9a04-09a51ceabc0d" (UID: "571c2781-59c0-4345-9a04-09a51ceabc0d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.782521 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config" (OuterVolumeSpecName: "config") pod "571c2781-59c0-4345-9a04-09a51ceabc0d" (UID: "571c2781-59c0-4345-9a04-09a51ceabc0d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.782552 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.782616 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.783354 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.785880 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.788294 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g" (OuterVolumeSpecName: "kube-api-access-sw97g") pod "571c2781-59c0-4345-9a04-09a51ceabc0d" (UID: "571c2781-59c0-4345-9a04-09a51ceabc0d"). InnerVolumeSpecName "kube-api-access-sw97g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.802348 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl9j4\" (UniqueName: \"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4\") pod \"dnsmasq-dns-f697c8bff-xcsxq\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.883019 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sw97g\" (UniqueName: \"kubernetes.io/projected/571c2781-59c0-4345-9a04-09a51ceabc0d-kube-api-access-sw97g\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.883105 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.883123 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/571c2781-59c0-4345-9a04-09a51ceabc0d-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.890899 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:10:51 crc kubenswrapper[5136]: I0320 07:10:51.988739 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vr74x"] Mar 20 07:10:51 crc kubenswrapper[5136]: W0320 07:10:51.990526 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ede60bf_5bc5_4267_9849_9389df070048.slice/crio-96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2 WatchSource:0}: Error finding container 96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2: Status 404 returned error can't find the container with id 96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2 Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.083925 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.083972 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54b5dffb47-zd72x" event={"ID":"571c2781-59c0-4345-9a04-09a51ceabc0d","Type":"ContainerDied","Data":"603105be24fa6a6cb4daf90aa8e7faaf32594389e035752cfb2e9cf8a54926bd"} Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.086969 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vr74x" event={"ID":"0ede60bf-5bc5-4267-9849-9389df070048","Type":"ContainerStarted","Data":"96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2"} Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.103882 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"] Mar 20 07:10:52 crc kubenswrapper[5136]: W0320 07:10:52.109550 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f0b6cee_c719_4ef8_a97a_f4ecbdac4e50.slice/crio-feddb6f7ff0c0085762a83a31ade90d04bd037a88d8fdb5ca4054aa0e23e524e WatchSource:0}: Error finding container feddb6f7ff0c0085762a83a31ade90d04bd037a88d8fdb5ca4054aa0e23e524e: Status 404 returned error can't find the container with id feddb6f7ff0c0085762a83a31ade90d04bd037a88d8fdb5ca4054aa0e23e524e Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.150562 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"] Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.156661 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54b5dffb47-zd72x"] Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.308528 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"] Mar 20 07:10:52 crc kubenswrapper[5136]: W0320 07:10:52.315569 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podde68a814_1b9a_4aad_9841_790f24b79e9e.slice/crio-2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559 WatchSource:0}: Error finding container 2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559: Status 404 returned error can't find the container with id 2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559 Mar 20 07:10:52 crc kubenswrapper[5136]: I0320 07:10:52.405486 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="571c2781-59c0-4345-9a04-09a51ceabc0d" path="/var/lib/kubelet/pods/571c2781-59c0-4345-9a04-09a51ceabc0d/volumes" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.887672 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.888376 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9rkmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*
true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-64696987c5-86bqj_openstack(1dd8ad22-4946-4d2c-b2cb-a38f42166c88): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.889808 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-64696987c5-86bqj" podUID="1dd8ad22-4946-4d2c-b2cb-a38f42166c88" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.896980 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.897111 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ln7fj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-854f47b4f9-rcppb_openstack(90e44514-0ddc-4151-ad00-cf458d5adf9e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.898247 5136 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" podUID="90e44514-0ddc-4151-ad00-cf458d5adf9e" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.905053 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.905178 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server@sha256:fbb5be29e9e4fa11f0743e7f74f2e80dcc7445d24770709ea0e038147f752c51,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-67tl9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5448ff6dc7-qtnm5_openstack(89d4f1a0-0e10-49e6-98bc-43920e03caba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:10:52 crc kubenswrapper[5136]: E0320 07:10:52.906240 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" podUID="89d4f1a0-0e10-49e6-98bc-43920e03caba" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.130271 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" event={"ID":"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50","Type":"ContainerStarted","Data":"feddb6f7ff0c0085762a83a31ade90d04bd037a88d8fdb5ca4054aa0e23e524e"} Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.137345 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" event={"ID":"de68a814-1b9a-4aad-9841-790f24b79e9e","Type":"ContainerStarted","Data":"2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559"} Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.143118 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerStarted","Data":"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29"} Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.818349 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.828710 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.841881 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.919661 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln7fj\" (UniqueName: \"kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj\") pod \"90e44514-0ddc-4151-ad00-cf458d5adf9e\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920119 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67tl9\" (UniqueName: \"kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9\") pod \"89d4f1a0-0e10-49e6-98bc-43920e03caba\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920154 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config\") pod \"90e44514-0ddc-4151-ad00-cf458d5adf9e\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920232 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config\") pod \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920264 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rkmb\" (UniqueName: \"kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb\") pod \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920282 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc\") pod \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\" (UID: \"1dd8ad22-4946-4d2c-b2cb-a38f42166c88\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920304 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc\") pod \"90e44514-0ddc-4151-ad00-cf458d5adf9e\" (UID: \"90e44514-0ddc-4151-ad00-cf458d5adf9e\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920337 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config\") pod \"89d4f1a0-0e10-49e6-98bc-43920e03caba\" (UID: \"89d4f1a0-0e10-49e6-98bc-43920e03caba\") " Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920954 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1dd8ad22-4946-4d2c-b2cb-a38f42166c88" (UID: "1dd8ad22-4946-4d2c-b2cb-a38f42166c88"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.920985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config" (OuterVolumeSpecName: "config") pod "1dd8ad22-4946-4d2c-b2cb-a38f42166c88" (UID: "1dd8ad22-4946-4d2c-b2cb-a38f42166c88"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.921843 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "90e44514-0ddc-4151-ad00-cf458d5adf9e" (UID: "90e44514-0ddc-4151-ad00-cf458d5adf9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.922129 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config" (OuterVolumeSpecName: "config") pod "89d4f1a0-0e10-49e6-98bc-43920e03caba" (UID: "89d4f1a0-0e10-49e6-98bc-43920e03caba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.922511 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.922575 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.922585 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.922593 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d4f1a0-0e10-49e6-98bc-43920e03caba-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.923517 5136 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config" (OuterVolumeSpecName: "config") pod "90e44514-0ddc-4151-ad00-cf458d5adf9e" (UID: "90e44514-0ddc-4151-ad00-cf458d5adf9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.923867 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj" (OuterVolumeSpecName: "kube-api-access-ln7fj") pod "90e44514-0ddc-4151-ad00-cf458d5adf9e" (UID: "90e44514-0ddc-4151-ad00-cf458d5adf9e"). InnerVolumeSpecName "kube-api-access-ln7fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.924111 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9" (OuterVolumeSpecName: "kube-api-access-67tl9") pod "89d4f1a0-0e10-49e6-98bc-43920e03caba" (UID: "89d4f1a0-0e10-49e6-98bc-43920e03caba"). InnerVolumeSpecName "kube-api-access-67tl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:10:53 crc kubenswrapper[5136]: I0320 07:10:53.926512 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb" (OuterVolumeSpecName: "kube-api-access-9rkmb") pod "1dd8ad22-4946-4d2c-b2cb-a38f42166c88" (UID: "1dd8ad22-4946-4d2c-b2cb-a38f42166c88"). InnerVolumeSpecName "kube-api-access-9rkmb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.024171 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rkmb\" (UniqueName: \"kubernetes.io/projected/1dd8ad22-4946-4d2c-b2cb-a38f42166c88-kube-api-access-9rkmb\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.024220 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln7fj\" (UniqueName: \"kubernetes.io/projected/90e44514-0ddc-4151-ad00-cf458d5adf9e-kube-api-access-ln7fj\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.024230 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67tl9\" (UniqueName: \"kubernetes.io/projected/89d4f1a0-0e10-49e6-98bc-43920e03caba-kube-api-access-67tl9\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.024293 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/90e44514-0ddc-4151-ad00-cf458d5adf9e-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.152835 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.152832 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5448ff6dc7-qtnm5" event={"ID":"89d4f1a0-0e10-49e6-98bc-43920e03caba","Type":"ContainerDied","Data":"b1739b4168d54cbd082e621d7a6b119af2a011d7e88735613ee365c8955655aa"} Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.155334 5136 generic.go:334] "Generic (PLEG): container finished" podID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerID="ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175" exitCode=0 Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.155398 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" event={"ID":"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50","Type":"ContainerDied","Data":"ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175"} Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.157241 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" event={"ID":"90e44514-0ddc-4151-ad00-cf458d5adf9e","Type":"ContainerDied","Data":"a835263469ad6ded88770537144f67af862b9ca0e4d044183f6bbb2b8ff9cb68"} Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.157316 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854f47b4f9-rcppb" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.161193 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerStarted","Data":"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71"} Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.162856 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-64696987c5-86bqj" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.162899 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-64696987c5-86bqj" event={"ID":"1dd8ad22-4946-4d2c-b2cb-a38f42166c88","Type":"ContainerDied","Data":"0124fdc9a4d5b014a0d0715276e212228f8d98c3844a190558ceec06845de8bc"} Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.316584 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"] Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.337848 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-854f47b4f9-rcppb"] Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.353632 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"] Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.361690 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5448ff6dc7-qtnm5"] Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.379849 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"] Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.429202 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d4f1a0-0e10-49e6-98bc-43920e03caba" path="/var/lib/kubelet/pods/89d4f1a0-0e10-49e6-98bc-43920e03caba/volumes" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.432296 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90e44514-0ddc-4151-ad00-cf458d5adf9e" path="/var/lib/kubelet/pods/90e44514-0ddc-4151-ad00-cf458d5adf9e/volumes" Mar 20 07:10:54 crc kubenswrapper[5136]: I0320 07:10:54.432759 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-64696987c5-86bqj"] Mar 20 07:10:56 crc kubenswrapper[5136]: I0320 07:10:56.405807 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1dd8ad22-4946-4d2c-b2cb-a38f42166c88" path="/var/lib/kubelet/pods/1dd8ad22-4946-4d2c-b2cb-a38f42166c88/volumes" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.218769 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerStarted","Data":"7f81f78f97fc5d48f48b6354b794c050f707e5b35fc6d46c7df2de9e4878960b"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.220774 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" event={"ID":"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50","Type":"ContainerStarted","Data":"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.221718 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.226729 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerStarted","Data":"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.227855 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerStarted","Data":"c02c0e7e0e6b0a33a002d424a3ac60fcdca9be308ea7764e0da41ae85bb3639a"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.230868 5136 generic.go:334] "Generic (PLEG): container finished" podID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerID="34ee2cccbe30631969d3aa93a1b8264849d8d5334e0c97572f21e0a6e95e8e26" exitCode=0 Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.230932 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" 
event={"ID":"de68a814-1b9a-4aad-9841-790f24b79e9e","Type":"ContainerDied","Data":"34ee2cccbe30631969d3aa93a1b8264849d8d5334e0c97572f21e0a6e95e8e26"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.236989 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vr74x" event={"ID":"0ede60bf-5bc5-4267-9849-9389df070048","Type":"ContainerStarted","Data":"c83952221ac9ae15d237b01aa417d2a8651bd6786c0034250cebe0e17be31690"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.239454 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"960739f0-c4a5-49c6-8e2a-9452815cf1a9","Type":"ContainerStarted","Data":"184e304c1ac08ec0deea0a800adeaeabbaf3a333a8f4d43b893a721e45afd9b7"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.239535 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.240053 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" podStartSLOduration=10.332007374 podStartE2EDuration="11.240034565s" podCreationTimestamp="2026-03-20 07:10:51 +0000 UTC" firstStartedPulling="2026-03-20 07:10:52.112386751 +0000 UTC m=+1284.371697902" lastFinishedPulling="2026-03-20 07:10:53.020413942 +0000 UTC m=+1285.279725093" observedRunningTime="2026-03-20 07:11:02.23557288 +0000 UTC m=+1294.494884031" watchObservedRunningTime="2026-03-20 07:11:02.240034565 +0000 UTC m=+1294.499345716" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.241776 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cf624d46-ce35-4e7f-b463-4b0eba006ded","Type":"ContainerStarted","Data":"0a42176f6839fd2b1fa46f8a90c2d73b4c4eaa11385cb9c81bf9e24e01ecf323"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.241995 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/kube-state-metrics-0" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.245207 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerStarted","Data":"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.248063 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerStarted","Data":"efcc419ead7f776e9a762552e20519a145846b32963d5cf946e1216d1b57e9d3"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.250315 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6" event={"ID":"04ee32c0-35eb-488d-b166-0ad8a8d09f48","Type":"ContainerStarted","Data":"a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff"} Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.250792 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-gnwt6" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.331510 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=13.313012844 podStartE2EDuration="27.33148832s" podCreationTimestamp="2026-03-20 07:10:35 +0000 UTC" firstStartedPulling="2026-03-20 07:10:47.659095023 +0000 UTC m=+1279.918406174" lastFinishedPulling="2026-03-20 07:11:01.677570509 +0000 UTC m=+1293.936881650" observedRunningTime="2026-03-20 07:11:02.313041711 +0000 UTC m=+1294.572352852" watchObservedRunningTime="2026-03-20 07:11:02.33148832 +0000 UTC m=+1294.590799471" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.336807 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-gnwt6" podStartSLOduration=10.088360372 podStartE2EDuration="23.336790842s" 
podCreationTimestamp="2026-03-20 07:10:39 +0000 UTC" firstStartedPulling="2026-03-20 07:10:47.718090143 +0000 UTC m=+1279.977401294" lastFinishedPulling="2026-03-20 07:11:00.966520583 +0000 UTC m=+1293.225831764" observedRunningTime="2026-03-20 07:11:02.325506559 +0000 UTC m=+1294.584817710" watchObservedRunningTime="2026-03-20 07:11:02.336790842 +0000 UTC m=+1294.596101993" Mar 20 07:11:02 crc kubenswrapper[5136]: I0320 07:11:02.344275 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=15.151644978 podStartE2EDuration="29.344262178s" podCreationTimestamp="2026-03-20 07:10:33 +0000 UTC" firstStartedPulling="2026-03-20 07:10:46.075282105 +0000 UTC m=+1278.334593276" lastFinishedPulling="2026-03-20 07:11:00.267899325 +0000 UTC m=+1292.527210476" observedRunningTime="2026-03-20 07:11:02.341798343 +0000 UTC m=+1294.601109494" watchObservedRunningTime="2026-03-20 07:11:02.344262178 +0000 UTC m=+1294.603573329" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.263749 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerStarted","Data":"55e70c80be714d08791bbb875a2885eb056808546361bafa1ce59b4a2b4afd94"} Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.267423 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerStarted","Data":"ad85344499cd2c34ea152d61f6efd8d5a2edf8814c85572a74d76108d11d3655"} Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.272342 5136 generic.go:334] "Generic (PLEG): container finished" podID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerID="251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56" exitCode=0 Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.272414 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerDied","Data":"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56"} Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.276127 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" event={"ID":"de68a814-1b9a-4aad-9841-790f24b79e9e","Type":"ContainerStarted","Data":"bd48417c8a8842903b86c0b0297625af601775c012346c6e4a42ced3c9d81a5c"} Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.276173 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.316174 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=10.099144412 podStartE2EDuration="23.316148489s" podCreationTimestamp="2026-03-20 07:10:40 +0000 UTC" firstStartedPulling="2026-03-20 07:10:47.971200163 +0000 UTC m=+1280.230511314" lastFinishedPulling="2026-03-20 07:11:01.18820423 +0000 UTC m=+1293.447515391" observedRunningTime="2026-03-20 07:11:03.294227724 +0000 UTC m=+1295.553538915" watchObservedRunningTime="2026-03-20 07:11:03.316148489 +0000 UTC m=+1295.575459660" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.346359 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vr74x" podStartSLOduration=2.702363294 podStartE2EDuration="12.346338234s" podCreationTimestamp="2026-03-20 07:10:51 +0000 UTC" firstStartedPulling="2026-03-20 07:10:51.992498102 +0000 UTC m=+1284.251809253" lastFinishedPulling="2026-03-20 07:11:01.636473032 +0000 UTC m=+1293.895784193" observedRunningTime="2026-03-20 07:11:03.327834913 +0000 UTC m=+1295.587146074" watchObservedRunningTime="2026-03-20 07:11:03.346338234 +0000 UTC m=+1295.605649405" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.357938 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" podStartSLOduration=11.036382686 podStartE2EDuration="12.357912496s" podCreationTimestamp="2026-03-20 07:10:51 +0000 UTC" firstStartedPulling="2026-03-20 07:10:52.317738041 +0000 UTC m=+1284.577049192" lastFinishedPulling="2026-03-20 07:10:53.639267851 +0000 UTC m=+1285.898579002" observedRunningTime="2026-03-20 07:11:03.350709827 +0000 UTC m=+1295.610020998" watchObservedRunningTime="2026-03-20 07:11:03.357912496 +0000 UTC m=+1295.617223657" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.383978 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=12.629790513 podStartE2EDuration="25.383961735s" podCreationTimestamp="2026-03-20 07:10:38 +0000 UTC" firstStartedPulling="2026-03-20 07:10:48.433068708 +0000 UTC m=+1280.692379859" lastFinishedPulling="2026-03-20 07:11:01.18723993 +0000 UTC m=+1293.446551081" observedRunningTime="2026-03-20 07:11:03.382194772 +0000 UTC m=+1295.641505963" watchObservedRunningTime="2026-03-20 07:11:03.383961735 +0000 UTC m=+1295.643272896" Mar 20 07:11:03 crc kubenswrapper[5136]: I0320 07:11:03.688142 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 20 07:11:04 crc kubenswrapper[5136]: I0320 07:11:04.286240 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerStarted","Data":"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c"} Mar 20 07:11:04 crc kubenswrapper[5136]: I0320 07:11:04.288362 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerStarted","Data":"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58"} Mar 20 07:11:04 crc 
kubenswrapper[5136]: I0320 07:11:04.325487 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-ldp4w" podStartSLOduration=13.486604965 podStartE2EDuration="25.325460094s" podCreationTimestamp="2026-03-20 07:10:39 +0000 UTC" firstStartedPulling="2026-03-20 07:10:48.92353708 +0000 UTC m=+1281.182848241" lastFinishedPulling="2026-03-20 07:11:00.762392229 +0000 UTC m=+1293.021703370" observedRunningTime="2026-03-20 07:11:04.308086306 +0000 UTC m=+1296.567397467" watchObservedRunningTime="2026-03-20 07:11:04.325460094 +0000 UTC m=+1296.584771245" Mar 20 07:11:04 crc kubenswrapper[5136]: I0320 07:11:04.689756 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 20 07:11:04 crc kubenswrapper[5136]: I0320 07:11:04.777595 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:11:04 crc kubenswrapper[5136]: I0320 07:11:04.777650 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:11:05 crc kubenswrapper[5136]: I0320 07:11:05.981004 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 20 07:11:06 crc kubenswrapper[5136]: I0320 07:11:06.046917 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 20 07:11:06 crc kubenswrapper[5136]: I0320 07:11:06.312316 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 20 07:11:06 crc kubenswrapper[5136]: I0320 07:11:06.356378 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 20 07:11:06 crc kubenswrapper[5136]: I0320 07:11:06.725918 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 20 07:11:07 crc 
kubenswrapper[5136]: I0320 07:11:07.318291 5136 generic.go:334] "Generic (PLEG): container finished" podID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerID="1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080" exitCode=0 Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.318351 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerDied","Data":"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080"} Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.320977 5136 generic.go:334] "Generic (PLEG): container finished" podID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerID="efcc419ead7f776e9a762552e20519a145846b32963d5cf946e1216d1b57e9d3" exitCode=0 Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.321002 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerDied","Data":"efcc419ead7f776e9a762552e20519a145846b32963d5cf946e1216d1b57e9d3"} Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.409526 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.556626 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.562108 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.565113 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.569704 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.587153 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.587339 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-bcbdk" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.587365 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594680 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594739 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594761 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " 
pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594859 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594928 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594975 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdvcz\" (UniqueName: \"kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.594995 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697129 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697207 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697244 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdvcz\" (UniqueName: \"kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697268 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697323 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697342 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.697360 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: 
\"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.699663 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.699726 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.700902 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.712795 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.713295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.713722 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.716926 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdvcz\" (UniqueName: \"kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz\") pod \"ovn-northd-0\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " pod="openstack/ovn-northd-0" Mar 20 07:11:07 crc kubenswrapper[5136]: I0320 07:11:07.906472 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.151714 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 07:11:08 crc kubenswrapper[5136]: W0320 07:11:08.158284 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7acbc76f_ff83_451e_826f_5fd1f977f74f.slice/crio-f77fa3b0a75190383cf99cb089377cd7d03639ec4bda09b8550cc55a30016174 WatchSource:0}: Error finding container f77fa3b0a75190383cf99cb089377cd7d03639ec4bda09b8550cc55a30016174: Status 404 returned error can't find the container with id f77fa3b0a75190383cf99cb089377cd7d03639ec4bda09b8550cc55a30016174 Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.330202 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerStarted","Data":"2f108d33fa61f242b6db0d1497e869fa649b09a841ba1ec8a5200036f1da6f44"} Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.331469 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerStarted","Data":"f77fa3b0a75190383cf99cb089377cd7d03639ec4bda09b8550cc55a30016174"} Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.333559 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerStarted","Data":"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2"} Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.353760 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.952991222 podStartE2EDuration="37.353739624s" podCreationTimestamp="2026-03-20 07:10:31 +0000 UTC" firstStartedPulling="2026-03-20 07:10:47.717429173 +0000 UTC m=+1279.976740324" lastFinishedPulling="2026-03-20 07:11:01.118177575 +0000 UTC m=+1293.377488726" observedRunningTime="2026-03-20 07:11:08.350063644 +0000 UTC m=+1300.609374815" watchObservedRunningTime="2026-03-20 07:11:08.353739624 +0000 UTC m=+1300.613050775" Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.373976 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=24.251966962 podStartE2EDuration="38.373960728s" podCreationTimestamp="2026-03-20 07:10:30 +0000 UTC" firstStartedPulling="2026-03-20 07:10:47.420215615 +0000 UTC m=+1279.679526766" lastFinishedPulling="2026-03-20 07:11:01.542209371 +0000 UTC m=+1293.801520532" observedRunningTime="2026-03-20 07:11:08.368545304 +0000 UTC m=+1300.627856455" watchObservedRunningTime="2026-03-20 07:11:08.373960728 +0000 UTC m=+1300.633271879" Mar 20 07:11:08 crc kubenswrapper[5136]: I0320 07:11:08.686065 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Mar 20 07:11:10 crc kubenswrapper[5136]: E0320 07:11:10.164827 5136 upgradeaware.go:427] Error proxying data from client to backend: 
readfrom tcp 38.102.83.163:42736->38.102.83.163:37797: write tcp 38.102.83.163:42736->38.102.83.163:37797: write: connection reset by peer Mar 20 07:11:10 crc kubenswrapper[5136]: I0320 07:11:10.353730 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerStarted","Data":"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf"} Mar 20 07:11:10 crc kubenswrapper[5136]: I0320 07:11:10.353990 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerStarted","Data":"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242"} Mar 20 07:11:10 crc kubenswrapper[5136]: I0320 07:11:10.354182 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 20 07:11:10 crc kubenswrapper[5136]: I0320 07:11:10.371859 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.2593994840000002 podStartE2EDuration="3.37183822s" podCreationTimestamp="2026-03-20 07:11:07 +0000 UTC" firstStartedPulling="2026-03-20 07:11:08.161034137 +0000 UTC m=+1300.420345288" lastFinishedPulling="2026-03-20 07:11:09.273472863 +0000 UTC m=+1301.532784024" observedRunningTime="2026-03-20 07:11:10.370022065 +0000 UTC m=+1302.629333246" watchObservedRunningTime="2026-03-20 07:11:10.37183822 +0000 UTC m=+1302.631149371" Mar 20 07:11:11 crc kubenswrapper[5136]: I0320 07:11:11.675067 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:11:11 crc kubenswrapper[5136]: I0320 07:11:11.815749 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Mar 20 07:11:11 crc kubenswrapper[5136]: I0320 07:11:11.816113 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openstack/openstack-galera-0" Mar 20 07:11:11 crc kubenswrapper[5136]: I0320 07:11:11.892720 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:11:11 crc kubenswrapper[5136]: I0320 07:11:11.942627 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"] Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.150305 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.367700 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="dnsmasq-dns" containerID="cri-o://73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3" gracePeriod=10 Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.436739 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.805264 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.892894 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb\") pod \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.893024 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc\") pod \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.893068 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config\") pod \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.893151 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th8ss\" (UniqueName: \"kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss\") pod \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\" (UID: \"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50\") " Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.901216 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss" (OuterVolumeSpecName: "kube-api-access-th8ss") pod "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" (UID: "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50"). InnerVolumeSpecName "kube-api-access-th8ss". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.934310 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config" (OuterVolumeSpecName: "config") pod "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" (UID: "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.943355 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" (UID: "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.950252 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" (UID: "7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.995358 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.995393 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.995406 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:12 crc kubenswrapper[5136]: I0320 07:11:12.995418 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th8ss\" (UniqueName: \"kubernetes.io/projected/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50-kube-api-access-th8ss\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.291986 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.292070 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.376429 5136 generic.go:334] "Generic (PLEG): container finished" podID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerID="73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3" exitCode=0 Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.376476 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.376510 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" event={"ID":"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50","Type":"ContainerDied","Data":"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3"} Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.376541 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-84d7bcdf99-wljsb" event={"ID":"7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50","Type":"ContainerDied","Data":"feddb6f7ff0c0085762a83a31ade90d04bd037a88d8fdb5ca4054aa0e23e524e"} Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.376556 5136 scope.go:117] "RemoveContainer" containerID="73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.406619 5136 scope.go:117] "RemoveContainer" containerID="ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.414791 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"] Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.425246 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-84d7bcdf99-wljsb"] Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.437696 5136 scope.go:117] "RemoveContainer" containerID="73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3" Mar 20 07:11:13 crc kubenswrapper[5136]: E0320 07:11:13.438412 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3\": container with ID starting with 73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3 not found: ID does not exist" 
containerID="73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.438459 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3"} err="failed to get container status \"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3\": rpc error: code = NotFound desc = could not find container \"73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3\": container with ID starting with 73f6a2a35fb846e8690cd4c8aca3693f1ecdc892a70193be9111a28cf3b52bf3 not found: ID does not exist" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.438484 5136 scope.go:117] "RemoveContainer" containerID="ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175" Mar 20 07:11:13 crc kubenswrapper[5136]: E0320 07:11:13.438986 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175\": container with ID starting with ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175 not found: ID does not exist" containerID="ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175" Mar 20 07:11:13 crc kubenswrapper[5136]: I0320 07:11:13.439029 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175"} err="failed to get container status \"ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175\": rpc error: code = NotFound desc = could not find container \"ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175\": container with ID starting with ef19880c1a92489e9859c2d59cb4fca431790d3ea28565e609120fa9d9f18175 not found: ID does not exist" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.407330 5136 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" path="/var/lib/kubelet/pods/7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50/volumes" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.699568 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e762-account-create-update-5vpcp"] Mar 20 07:11:14 crc kubenswrapper[5136]: E0320 07:11:14.700164 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="dnsmasq-dns" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.700186 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="dnsmasq-dns" Mar 20 07:11:14 crc kubenswrapper[5136]: E0320 07:11:14.700218 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="init" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.700225 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="init" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.700436 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f0b6cee-c719-4ef8-a97a-f4ecbdac4e50" containerName="dnsmasq-dns" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.701035 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.702916 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.721022 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e762-account-create-update-5vpcp"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.747574 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-kfc9f"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.748531 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.757798 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kfc9f"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.831862 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.831921 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4ghp\" (UniqueName: \"kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.831962 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57zvb\" 
(UniqueName: \"kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.832033 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.839576 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-bk75j"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.840413 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bk75j" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.849772 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bk75j"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.933315 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4ghp\" (UniqueName: \"kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.933385 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57zvb\" (UniqueName: \"kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 
07:11:14.933449 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5jxw\" (UniqueName: \"kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.933490 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.933525 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.933581 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.934528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 
07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.934538 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.951997 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a0f6-account-create-update-c9hl7"] Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.952979 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.958034 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.958371 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57zvb\" (UniqueName: \"kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb\") pod \"keystone-db-create-kfc9f\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") " pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.963406 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4ghp\" (UniqueName: \"kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp\") pod \"keystone-e762-account-create-update-5vpcp\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") " pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:14 crc kubenswrapper[5136]: I0320 07:11:14.970032 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a0f6-account-create-update-c9hl7"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.019308 5136 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.036666 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.036781 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5jxw\" (UniqueName: \"kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.036894 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkkrz\" (UniqueName: \"kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.036990 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.048421 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.062713 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5jxw\" (UniqueName: \"kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw\") pod \"placement-db-create-bk75j\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") " pod="openstack/placement-db-create-bk75j" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.064772 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.138935 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.138972 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkkrz\" (UniqueName: \"kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.140224 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " 
pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.153831 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bk75j" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.158363 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkkrz\" (UniqueName: \"kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz\") pod \"placement-a0f6-account-create-update-c9hl7\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") " pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.327602 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.397210 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-kfc9f"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.477062 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e762-account-create-update-5vpcp"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.668465 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-bk75j"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.736113 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"] Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.737268 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.754088 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.780468 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"]
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.824162 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.824237 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.887514 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.887902 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.887956 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.888084 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wch7g\" (UniqueName: \"kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.888167 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.899209 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a0f6-account-create-update-c9hl7"]
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.949094 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.999253 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.999287 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.999310 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.999355 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wch7g\" (UniqueName: \"kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:15 crc kubenswrapper[5136]: I0320 07:11:15.999386 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.000171 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.000653 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.001168 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.001633 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.041135 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wch7g\" (UniqueName: \"kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g\") pod \"dnsmasq-dns-b4ddd5fb7-cfjcg\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.066383 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.096012 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.408946 5136 generic.go:334] "Generic (PLEG): container finished" podID="b4e39c5d-af98-44d6-a06d-f31555db758b" containerID="933fdd395d96426dd2696ed053dd4cefada8c95df3be0a52f3cc68ad68f9aebb" exitCode=0
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.408998 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kfc9f" event={"ID":"b4e39c5d-af98-44d6-a06d-f31555db758b","Type":"ContainerDied","Data":"933fdd395d96426dd2696ed053dd4cefada8c95df3be0a52f3cc68ad68f9aebb"}
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.409255 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kfc9f" event={"ID":"b4e39c5d-af98-44d6-a06d-f31555db758b","Type":"ContainerStarted","Data":"91a625bb9304a7f8b51faab35b5fec61175c95ec85e746ddaca0e50d60fc3071"}
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.411252 5136 generic.go:334] "Generic (PLEG): container finished" podID="0954a67c-5522-4338-b9e6-fc1b35b48cdb" containerID="7056c10c02d573c52be9cb6646cfd2016f281214c76d5613dade95a4d450b824" exitCode=0
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.411330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e762-account-create-update-5vpcp" event={"ID":"0954a67c-5522-4338-b9e6-fc1b35b48cdb","Type":"ContainerDied","Data":"7056c10c02d573c52be9cb6646cfd2016f281214c76d5613dade95a4d450b824"}
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.411362 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e762-account-create-update-5vpcp" event={"ID":"0954a67c-5522-4338-b9e6-fc1b35b48cdb","Type":"ContainerStarted","Data":"48512ac7078a277888ee056a62b4a6b84197b81b5d0d2cb3c783d1abfeaec964"}
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.417677 5136 generic.go:334] "Generic (PLEG): container finished" podID="4a15871b-0fd2-4db9-a42a-8e822efa35fb" containerID="f8f2b333bca19081fee1627c5e046485a6793b7781e892f02c6a8b08ca392e57" exitCode=0
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.417722 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bk75j" event={"ID":"4a15871b-0fd2-4db9-a42a-8e822efa35fb","Type":"ContainerDied","Data":"f8f2b333bca19081fee1627c5e046485a6793b7781e892f02c6a8b08ca392e57"}
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.417768 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bk75j" event={"ID":"4a15871b-0fd2-4db9-a42a-8e822efa35fb","Type":"ContainerStarted","Data":"cec4a6950888591d2e5df3e8e1f2226263c066e2869ac680a5e656395d2183a3"}
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.419738 5136 generic.go:334] "Generic (PLEG): container finished" podID="81055905-a498-49a7-917a-2032a292710e" containerID="ca4d6aff6fa4147c69ade98576093b5726d3ffc5a53c4a7f48a1261885cf9eaf" exitCode=0
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.419792 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0f6-account-create-update-c9hl7" event={"ID":"81055905-a498-49a7-917a-2032a292710e","Type":"ContainerDied","Data":"ca4d6aff6fa4147c69ade98576093b5726d3ffc5a53c4a7f48a1261885cf9eaf"}
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.419816 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0f6-account-create-update-c9hl7" event={"ID":"81055905-a498-49a7-917a-2032a292710e","Type":"ContainerStarted","Data":"b0ee3bca0a6e172f1f5ecea08b36c9a75b540d345edc1a90ca2b72e664d06260"}
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.550914 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"]
Mar 20 07:11:16 crc kubenswrapper[5136]: W0320 07:11:16.553385 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd103abed_83b7_44e9_bc7f_786434426647.slice/crio-5e9c9cbaa5f048deb6cd641b68b15e9585b27cf0df0654e28450ce6a42c152c0 WatchSource:0}: Error finding container 5e9c9cbaa5f048deb6cd641b68b15e9585b27cf0df0654e28450ce6a42c152c0: Status 404 returned error can't find the container with id 5e9c9cbaa5f048deb6cd641b68b15e9585b27cf0df0654e28450ce6a42c152c0
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.941807 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.949393 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.951037 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-g9cz6"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.951956 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.951993 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.952989 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Mar 20 07:11:16 crc kubenswrapper[5136]: I0320 07:11:16.970207 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.015291 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77dwk\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.015364 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.015393 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.015426 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.015452 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.015514 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77dwk\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117470 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117494 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117521 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117542 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.117570 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.118142 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: E0320 07:11:17.118150 5136 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 20 07:11:17 crc kubenswrapper[5136]: E0320 07:11:17.118202 5136 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 20 07:11:17 crc kubenswrapper[5136]: E0320 07:11:17.118289 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift podName:dd944fb6-1517-4f5b-b579-79d8f1f3da19 nodeName:}" failed. No retries permitted until 2026-03-20 07:11:17.618261307 +0000 UTC m=+1309.877572488 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift") pod "swift-storage-0" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19") : configmap "swift-ring-files" not found
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.118338 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.118350 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.131168 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.150193 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.154407 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77dwk\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.428516 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-v7xvp"]
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.430109 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.433348 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.433349 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.433378 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.435215 5136 generic.go:334] "Generic (PLEG): container finished" podID="d103abed-83b7-44e9-bc7f-786434426647" containerID="533f92371dee2235f15d0d84ab9f13da275c7e919c6618b46cdf3ab8345571a9" exitCode=0
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.436344 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" event={"ID":"d103abed-83b7-44e9-bc7f-786434426647","Type":"ContainerDied","Data":"533f92371dee2235f15d0d84ab9f13da275c7e919c6618b46cdf3ab8345571a9"}
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.436375 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" event={"ID":"d103abed-83b7-44e9-bc7f-786434426647","Type":"ContainerStarted","Data":"5e9c9cbaa5f048deb6cd641b68b15e9585b27cf0df0654e28450ce6a42c152c0"}
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.455506 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v7xvp"]
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543524 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543598 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543665 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543703 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543741 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtknk\" (UniqueName: \"kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543777 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.543801 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.645707 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.645823 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.645887 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.645941 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.645989 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.646040 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.646095 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtknk\" (UniqueName: \"kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.646146 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: E0320 07:11:17.646180 5136 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Mar 20 07:11:17 crc kubenswrapper[5136]: E0320 07:11:17.646205 5136 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Mar 20 07:11:17 crc kubenswrapper[5136]: E0320 07:11:17.646260 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift podName:dd944fb6-1517-4f5b-b579-79d8f1f3da19 nodeName:}" failed. No retries permitted until 2026-03-20 07:11:18.646241288 +0000 UTC m=+1310.905552439 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift") pod "swift-storage-0" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19") : configmap "swift-ring-files" not found
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.646470 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.646532 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.648388 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.649544 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.650065 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.650388 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.662552 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtknk\" (UniqueName: \"kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk\") pod \"swift-ring-rebalance-v7xvp\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.795779 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-v7xvp"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.812345 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-kfc9f"
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.951669 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57zvb\" (UniqueName: \"kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb\") pod \"b4e39c5d-af98-44d6-a06d-f31555db758b\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") "
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.952102 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts\") pod \"b4e39c5d-af98-44d6-a06d-f31555db758b\" (UID: \"b4e39c5d-af98-44d6-a06d-f31555db758b\") "
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.954487 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4e39c5d-af98-44d6-a06d-f31555db758b" (UID: "b4e39c5d-af98-44d6-a06d-f31555db758b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:11:17 crc kubenswrapper[5136]: I0320 07:11:17.991120 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb" (OuterVolumeSpecName: "kube-api-access-57zvb") pod "b4e39c5d-af98-44d6-a06d-f31555db758b" (UID: "b4e39c5d-af98-44d6-a06d-f31555db758b"). InnerVolumeSpecName "kube-api-access-57zvb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.054511 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57zvb\" (UniqueName: \"kubernetes.io/projected/b4e39c5d-af98-44d6-a06d-f31555db758b-kube-api-access-57zvb\") on node \"crc\" DevicePath \"\""
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.054543 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e39c5d-af98-44d6-a06d-f31555db758b-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.092683 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0f6-account-create-update-c9hl7"
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.099505 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bk75j"
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.104753 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e762-account-create-update-5vpcp"
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155614 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkkrz\" (UniqueName: \"kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz\") pod \"81055905-a498-49a7-917a-2032a292710e\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") "
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155699 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts\") pod \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") "
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155763 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts\") pod \"81055905-a498-49a7-917a-2032a292710e\" (UID: \"81055905-a498-49a7-917a-2032a292710e\") "
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155812 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts\") pod \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") "
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155922 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4ghp\" (UniqueName: \"kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp\") pod \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\" (UID: \"0954a67c-5522-4338-b9e6-fc1b35b48cdb\") "
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.155964 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5jxw\" (UniqueName: \"kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw\") pod \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\" (UID: \"4a15871b-0fd2-4db9-a42a-8e822efa35fb\") "
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156256 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0954a67c-5522-4338-b9e6-fc1b35b48cdb" (UID: "0954a67c-5522-4338-b9e6-fc1b35b48cdb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156270 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81055905-a498-49a7-917a-2032a292710e" (UID: "81055905-a498-49a7-917a-2032a292710e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156358 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a15871b-0fd2-4db9-a42a-8e822efa35fb" (UID: "4a15871b-0fd2-4db9-a42a-8e822efa35fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156500 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0954a67c-5522-4338-b9e6-fc1b35b48cdb-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156517 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81055905-a498-49a7-917a-2032a292710e-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.156527 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a15871b-0fd2-4db9-a42a-8e822efa35fb-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.159620 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz" (OuterVolumeSpecName: "kube-api-access-fkkrz") pod "81055905-a498-49a7-917a-2032a292710e" (UID: "81055905-a498-49a7-917a-2032a292710e"). InnerVolumeSpecName "kube-api-access-fkkrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.176114 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw" (OuterVolumeSpecName: "kube-api-access-r5jxw") pod "4a15871b-0fd2-4db9-a42a-8e822efa35fb" (UID: "4a15871b-0fd2-4db9-a42a-8e822efa35fb"). InnerVolumeSpecName "kube-api-access-r5jxw".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.176303 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp" (OuterVolumeSpecName: "kube-api-access-z4ghp") pod "0954a67c-5522-4338-b9e6-fc1b35b48cdb" (UID: "0954a67c-5522-4338-b9e6-fc1b35b48cdb"). InnerVolumeSpecName "kube-api-access-z4ghp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.258267 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkkrz\" (UniqueName: \"kubernetes.io/projected/81055905-a498-49a7-917a-2032a292710e-kube-api-access-fkkrz\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.258322 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4ghp\" (UniqueName: \"kubernetes.io/projected/0954a67c-5522-4338-b9e6-fc1b35b48cdb-kube-api-access-z4ghp\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.258332 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5jxw\" (UniqueName: \"kubernetes.io/projected/4a15871b-0fd2-4db9-a42a-8e822efa35fb-kube-api-access-r5jxw\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.305079 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-v7xvp"] Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.444117 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-e762-account-create-update-5vpcp" event={"ID":"0954a67c-5522-4338-b9e6-fc1b35b48cdb","Type":"ContainerDied","Data":"48512ac7078a277888ee056a62b4a6b84197b81b5d0d2cb3c783d1abfeaec964"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.444157 5136 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="48512ac7078a277888ee056a62b4a6b84197b81b5d0d2cb3c783d1abfeaec964" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.444206 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e762-account-create-update-5vpcp" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.446373 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-bk75j" event={"ID":"4a15871b-0fd2-4db9-a42a-8e822efa35fb","Type":"ContainerDied","Data":"cec4a6950888591d2e5df3e8e1f2226263c066e2869ac680a5e656395d2183a3"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.446793 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cec4a6950888591d2e5df3e8e1f2226263c066e2869ac680a5e656395d2183a3" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.446414 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-bk75j" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.448109 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a0f6-account-create-update-c9hl7" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.448451 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0f6-account-create-update-c9hl7" event={"ID":"81055905-a498-49a7-917a-2032a292710e","Type":"ContainerDied","Data":"b0ee3bca0a6e172f1f5ecea08b36c9a75b540d345edc1a90ca2b72e664d06260"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.448626 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0ee3bca0a6e172f1f5ecea08b36c9a75b540d345edc1a90ca2b72e664d06260" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.452926 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" event={"ID":"d103abed-83b7-44e9-bc7f-786434426647","Type":"ContainerStarted","Data":"604b652a792660f1238e2607b4242155d6fa3281d34ce55590b668cd26222f1b"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.454230 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.455912 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-kfc9f" event={"ID":"b4e39c5d-af98-44d6-a06d-f31555db758b","Type":"ContainerDied","Data":"91a625bb9304a7f8b51faab35b5fec61175c95ec85e746ddaca0e50d60fc3071"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.455950 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91a625bb9304a7f8b51faab35b5fec61175c95ec85e746ddaca0e50d60fc3071" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.456040 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-kfc9f" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.457622 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7xvp" event={"ID":"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60","Type":"ContainerStarted","Data":"158b8904c559e5367b1f3b8f9dd4746bcb9987780df1531c47db70ae775f7d6f"} Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.479992 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" podStartSLOduration=3.479967915 podStartE2EDuration="3.479967915s" podCreationTimestamp="2026-03-20 07:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:11:18.473789207 +0000 UTC m=+1310.733100368" watchObservedRunningTime="2026-03-20 07:11:18.479967915 +0000 UTC m=+1310.739279066" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.664780 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.665012 5136 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.665045 5136 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.665112 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift podName:dd944fb6-1517-4f5b-b579-79d8f1f3da19 nodeName:}" failed. 
No retries permitted until 2026-03-20 07:11:20.665089152 +0000 UTC m=+1312.924400313 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift") pod "swift-storage-0" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19") : configmap "swift-ring-files" not found Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805095 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-c6tbf"] Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.805501 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a15871b-0fd2-4db9-a42a-8e822efa35fb" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805520 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a15871b-0fd2-4db9-a42a-8e822efa35fb" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.805535 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0954a67c-5522-4338-b9e6-fc1b35b48cdb" containerName="mariadb-account-create-update" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805544 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0954a67c-5522-4338-b9e6-fc1b35b48cdb" containerName="mariadb-account-create-update" Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.805555 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e39c5d-af98-44d6-a06d-f31555db758b" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805562 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e39c5d-af98-44d6-a06d-f31555db758b" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: E0320 07:11:18.805577 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81055905-a498-49a7-917a-2032a292710e" containerName="mariadb-account-create-update" Mar 20 
07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805590 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="81055905-a498-49a7-917a-2032a292710e" containerName="mariadb-account-create-update" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805762 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e39c5d-af98-44d6-a06d-f31555db758b" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805785 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="81055905-a498-49a7-917a-2032a292710e" containerName="mariadb-account-create-update" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805805 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a15871b-0fd2-4db9-a42a-8e822efa35fb" containerName="mariadb-database-create" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.805843 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0954a67c-5522-4338-b9e6-fc1b35b48cdb" containerName="mariadb-account-create-update" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.806378 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.818553 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c6tbf"] Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.868484 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzrhd\" (UniqueName: \"kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.868562 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.917613 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-a033-account-create-update-ww8m7"] Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.918616 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.925235 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.936267 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a033-account-create-update-ww8m7"] Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.970115 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzrhd\" (UniqueName: \"kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.970192 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.970245 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts\") pod \"glance-a033-account-create-update-ww8m7\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.970288 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gj8w\" (UniqueName: \"kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w\") pod \"glance-a033-account-create-update-ww8m7\" (UID: 
\"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.971219 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:18 crc kubenswrapper[5136]: I0320 07:11:18.988365 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzrhd\" (UniqueName: \"kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd\") pod \"glance-db-create-c6tbf\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.071981 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts\") pod \"glance-a033-account-create-update-ww8m7\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.072045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gj8w\" (UniqueName: \"kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w\") pod \"glance-a033-account-create-update-ww8m7\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.072591 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts\") pod \"glance-a033-account-create-update-ww8m7\" 
(UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.097535 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gj8w\" (UniqueName: \"kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w\") pod \"glance-a033-account-create-update-ww8m7\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.131004 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.232278 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.597857 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c6tbf"] Mar 20 07:11:19 crc kubenswrapper[5136]: I0320 07:11:19.729569 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-a033-account-create-update-ww8m7"] Mar 20 07:11:19 crc kubenswrapper[5136]: W0320 07:11:19.732455 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52702304_46c3_4028_af56_60e936dea0a9.slice/crio-81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f WatchSource:0}: Error finding container 81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f: Status 404 returned error can't find the container with id 81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.476145 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-vrh5d"] Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.477466 
5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.480997 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.491908 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vrh5d"] Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.499713 5136 generic.go:334] "Generic (PLEG): container finished" podID="744eb619-4231-474c-a8b2-a37ed7432086" containerID="61abc8440208cd19caa61d866cd42cc249d0d527cfebb488be887ccce4bdea72" exitCode=0 Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.499771 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c6tbf" event={"ID":"744eb619-4231-474c-a8b2-a37ed7432086","Type":"ContainerDied","Data":"61abc8440208cd19caa61d866cd42cc249d0d527cfebb488be887ccce4bdea72"} Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.499804 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c6tbf" event={"ID":"744eb619-4231-474c-a8b2-a37ed7432086","Type":"ContainerStarted","Data":"f5b4c0167edf314cdc0984000087c4b3f5860a0e137fe9285c5122f8416a493d"} Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.504149 5136 generic.go:334] "Generic (PLEG): container finished" podID="52702304-46c3-4028-af56-60e936dea0a9" containerID="dc6f042f4a1f3f8ba50fa65cef930cd8040f1e880b0843b1b3beecf9065681fb" exitCode=0 Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.504680 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a033-account-create-update-ww8m7" event={"ID":"52702304-46c3-4028-af56-60e936dea0a9","Type":"ContainerDied","Data":"dc6f042f4a1f3f8ba50fa65cef930cd8040f1e880b0843b1b3beecf9065681fb"} Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.504700 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a033-account-create-update-ww8m7" event={"ID":"52702304-46c3-4028-af56-60e936dea0a9","Type":"ContainerStarted","Data":"81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f"} Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.599949 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.600002 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5crp\" (UniqueName: \"kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.702316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.702409 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.702481 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-s5crp\" (UniqueName: \"kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.703289 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: E0320 07:11:20.703419 5136 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 07:11:20 crc kubenswrapper[5136]: E0320 07:11:20.703443 5136 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 07:11:20 crc kubenswrapper[5136]: E0320 07:11:20.703502 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift podName:dd944fb6-1517-4f5b-b579-79d8f1f3da19 nodeName:}" failed. No retries permitted until 2026-03-20 07:11:24.703486184 +0000 UTC m=+1316.962797335 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift") pod "swift-storage-0" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19") : configmap "swift-ring-files" not found Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.722877 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5crp\" (UniqueName: \"kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp\") pod \"root-account-create-update-vrh5d\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:20 crc kubenswrapper[5136]: I0320 07:11:20.802607 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.478231 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.485015 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.524669 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c6tbf" event={"ID":"744eb619-4231-474c-a8b2-a37ed7432086","Type":"ContainerDied","Data":"f5b4c0167edf314cdc0984000087c4b3f5860a0e137fe9285c5122f8416a493d"} Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.524730 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5b4c0167edf314cdc0984000087c4b3f5860a0e137fe9285c5122f8416a493d" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.524835 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-c6tbf" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.528767 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-a033-account-create-update-ww8m7" event={"ID":"52702304-46c3-4028-af56-60e936dea0a9","Type":"ContainerDied","Data":"81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f"} Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.528797 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81e974406f4fb5acfe8a8d5dd616a66deefd845c9e0571fb5eba54c5e651e02f" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.528948 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-a033-account-create-update-ww8m7" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.533545 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts\") pod \"744eb619-4231-474c-a8b2-a37ed7432086\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.533672 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gj8w\" (UniqueName: \"kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w\") pod \"52702304-46c3-4028-af56-60e936dea0a9\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.533725 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzrhd\" (UniqueName: \"kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd\") pod \"744eb619-4231-474c-a8b2-a37ed7432086\" (UID: \"744eb619-4231-474c-a8b2-a37ed7432086\") " Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.533962 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts\") pod \"52702304-46c3-4028-af56-60e936dea0a9\" (UID: \"52702304-46c3-4028-af56-60e936dea0a9\") " Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.534741 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52702304-46c3-4028-af56-60e936dea0a9" (UID: "52702304-46c3-4028-af56-60e936dea0a9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.534855 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "744eb619-4231-474c-a8b2-a37ed7432086" (UID: "744eb619-4231-474c-a8b2-a37ed7432086"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.539487 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd" (OuterVolumeSpecName: "kube-api-access-pzrhd") pod "744eb619-4231-474c-a8b2-a37ed7432086" (UID: "744eb619-4231-474c-a8b2-a37ed7432086"). InnerVolumeSpecName "kube-api-access-pzrhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.555842 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w" (OuterVolumeSpecName: "kube-api-access-4gj8w") pod "52702304-46c3-4028-af56-60e936dea0a9" (UID: "52702304-46c3-4028-af56-60e936dea0a9"). 
InnerVolumeSpecName "kube-api-access-4gj8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.635933 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52702304-46c3-4028-af56-60e936dea0a9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.635965 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/744eb619-4231-474c-a8b2-a37ed7432086-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.635975 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gj8w\" (UniqueName: \"kubernetes.io/projected/52702304-46c3-4028-af56-60e936dea0a9-kube-api-access-4gj8w\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.635986 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzrhd\" (UniqueName: \"kubernetes.io/projected/744eb619-4231-474c-a8b2-a37ed7432086-kube-api-access-pzrhd\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:22 crc kubenswrapper[5136]: I0320 07:11:22.719983 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-vrh5d"] Mar 20 07:11:22 crc kubenswrapper[5136]: W0320 07:11:22.724860 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c9bf89_c898_469c_8a83_e1b945b234a6.slice/crio-8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa WatchSource:0}: Error finding container 8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa: Status 404 returned error can't find the container with id 8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa Mar 20 07:11:23 crc kubenswrapper[5136]: I0320 07:11:23.541544 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7xvp" event={"ID":"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60","Type":"ContainerStarted","Data":"df74ea59bf43247509097578b0b44714fcb954b2204d1d34decc8550e92f3f6e"} Mar 20 07:11:23 crc kubenswrapper[5136]: I0320 07:11:23.544602 5136 generic.go:334] "Generic (PLEG): container finished" podID="c6c9bf89-c898-469c-8a83-e1b945b234a6" containerID="2e4cee4a85209760afcb1fc4e1920e495e69a4a4c4fbdedacaa3ff6869eb619f" exitCode=0 Mar 20 07:11:23 crc kubenswrapper[5136]: I0320 07:11:23.544640 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vrh5d" event={"ID":"c6c9bf89-c898-469c-8a83-e1b945b234a6","Type":"ContainerDied","Data":"2e4cee4a85209760afcb1fc4e1920e495e69a4a4c4fbdedacaa3ff6869eb619f"} Mar 20 07:11:23 crc kubenswrapper[5136]: I0320 07:11:23.544661 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vrh5d" event={"ID":"c6c9bf89-c898-469c-8a83-e1b945b234a6","Type":"ContainerStarted","Data":"8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa"} Mar 20 07:11:23 crc kubenswrapper[5136]: I0320 07:11:23.561935 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-v7xvp" podStartSLOduration=2.5444660260000003 podStartE2EDuration="6.561919168s" podCreationTimestamp="2026-03-20 07:11:17 +0000 UTC" firstStartedPulling="2026-03-20 07:11:18.317548837 +0000 UTC m=+1310.576859988" lastFinishedPulling="2026-03-20 07:11:22.335001979 +0000 UTC m=+1314.594313130" observedRunningTime="2026-03-20 07:11:23.555234985 +0000 UTC m=+1315.814546156" watchObservedRunningTime="2026-03-20 07:11:23.561919168 +0000 UTC m=+1315.821230309" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.059316 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ldzkm"] Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.059701 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="744eb619-4231-474c-a8b2-a37ed7432086" containerName="mariadb-database-create" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.059723 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="744eb619-4231-474c-a8b2-a37ed7432086" containerName="mariadb-database-create" Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.059744 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52702304-46c3-4028-af56-60e936dea0a9" containerName="mariadb-account-create-update" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.059754 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="52702304-46c3-4028-af56-60e936dea0a9" containerName="mariadb-account-create-update" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.059970 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="744eb619-4231-474c-a8b2-a37ed7432086" containerName="mariadb-database-create" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.059999 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="52702304-46c3-4028-af56-60e936dea0a9" containerName="mariadb-account-create-update" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.060618 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.070170 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4q9lc" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.070501 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.071760 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ldzkm"] Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.160417 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.160464 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.160561 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4cjp\" (UniqueName: \"kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.160723 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.262737 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.262809 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.262855 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.262926 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4cjp\" (UniqueName: \"kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.269117 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data\") pod \"glance-db-sync-ldzkm\" (UID: 
\"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.271288 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.277145 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.291944 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4cjp\" (UniqueName: \"kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp\") pod \"glance-db-sync-ldzkm\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.381180 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ldzkm" Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.508615 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod261514f8_7734_423d_b15a_e83fdc2a85fd.slice/crio-3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29.scope\": RecentStats: unable to find data in memory cache]" Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.556282 5136 generic.go:334] "Generic (PLEG): container finished" podID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerID="3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29" exitCode=0 Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.557197 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerDied","Data":"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29"} Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.775446 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.775995 5136 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.776017 5136 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Mar 20 07:11:24 crc kubenswrapper[5136]: E0320 07:11:24.776133 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift 
podName:dd944fb6-1517-4f5b-b579-79d8f1f3da19 nodeName:}" failed. No retries permitted until 2026-03-20 07:11:32.77611354 +0000 UTC m=+1325.035424691 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift") pod "swift-storage-0" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19") : configmap "swift-ring-files" not found Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.975559 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ldzkm"] Mar 20 07:11:24 crc kubenswrapper[5136]: W0320 07:11:24.979588 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8c6efdb_3b8c_4123_bfb6_a67cd416fb18.slice/crio-aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf WatchSource:0}: Error finding container aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf: Status 404 returned error can't find the container with id aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf Mar 20 07:11:24 crc kubenswrapper[5136]: I0320 07:11:24.980968 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.082406 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5crp\" (UniqueName: \"kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp\") pod \"c6c9bf89-c898-469c-8a83-e1b945b234a6\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.082774 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts\") pod \"c6c9bf89-c898-469c-8a83-e1b945b234a6\" (UID: \"c6c9bf89-c898-469c-8a83-e1b945b234a6\") " Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.083214 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c6c9bf89-c898-469c-8a83-e1b945b234a6" (UID: "c6c9bf89-c898-469c-8a83-e1b945b234a6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.087863 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp" (OuterVolumeSpecName: "kube-api-access-s5crp") pod "c6c9bf89-c898-469c-8a83-e1b945b234a6" (UID: "c6c9bf89-c898-469c-8a83-e1b945b234a6"). InnerVolumeSpecName "kube-api-access-s5crp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.184175 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5crp\" (UniqueName: \"kubernetes.io/projected/c6c9bf89-c898-469c-8a83-e1b945b234a6-kube-api-access-s5crp\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.184210 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c6c9bf89-c898-469c-8a83-e1b945b234a6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.568787 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerStarted","Data":"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155"} Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.569020 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.569848 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ldzkm" event={"ID":"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18","Type":"ContainerStarted","Data":"aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf"} Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.571299 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-vrh5d" event={"ID":"c6c9bf89-c898-469c-8a83-e1b945b234a6","Type":"ContainerDied","Data":"8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa"} Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.571316 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-vrh5d" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.571327 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d39b8c14523e074cbaf42b425bd1908d4407272212b321cef3de376df1682aa" Mar 20 07:11:25 crc kubenswrapper[5136]: I0320 07:11:25.602071 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=40.636103932 podStartE2EDuration="57.602054562s" podCreationTimestamp="2026-03-20 07:10:28 +0000 UTC" firstStartedPulling="2026-03-20 07:10:30.222505112 +0000 UTC m=+1262.481816263" lastFinishedPulling="2026-03-20 07:10:47.188455742 +0000 UTC m=+1279.447766893" observedRunningTime="2026-03-20 07:11:25.595489503 +0000 UTC m=+1317.854800654" watchObservedRunningTime="2026-03-20 07:11:25.602054562 +0000 UTC m=+1317.861365713" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.098739 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.165409 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"] Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.165961 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="dnsmasq-dns" containerID="cri-o://bd48417c8a8842903b86c0b0297625af601775c012346c6e4a42ced3c9d81a5c" gracePeriod=10 Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.583919 5136 generic.go:334] "Generic (PLEG): container finished" podID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerID="bd48417c8a8842903b86c0b0297625af601775c012346c6e4a42ced3c9d81a5c" exitCode=0 Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.583978 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" event={"ID":"de68a814-1b9a-4aad-9841-790f24b79e9e","Type":"ContainerDied","Data":"bd48417c8a8842903b86c0b0297625af601775c012346c6e4a42ced3c9d81a5c"} Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.584003 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" event={"ID":"de68a814-1b9a-4aad-9841-790f24b79e9e","Type":"ContainerDied","Data":"2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559"} Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.584016 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2be92f8d39a9b4d63c74f7df0ae6d838690d70faf780735edb28289a75abf559" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.586365 5136 generic.go:334] "Generic (PLEG): container finished" podID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerID="746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71" exitCode=0 Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.587062 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerDied","Data":"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71"} Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.741135 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.920267 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config\") pod \"de68a814-1b9a-4aad-9841-790f24b79e9e\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.920345 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb\") pod \"de68a814-1b9a-4aad-9841-790f24b79e9e\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.920409 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc\") pod \"de68a814-1b9a-4aad-9841-790f24b79e9e\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.920468 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb\") pod \"de68a814-1b9a-4aad-9841-790f24b79e9e\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.920566 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl9j4\" (UniqueName: \"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4\") pod \"de68a814-1b9a-4aad-9841-790f24b79e9e\" (UID: \"de68a814-1b9a-4aad-9841-790f24b79e9e\") " Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.926477 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4" (OuterVolumeSpecName: "kube-api-access-gl9j4") pod "de68a814-1b9a-4aad-9841-790f24b79e9e" (UID: "de68a814-1b9a-4aad-9841-790f24b79e9e"). InnerVolumeSpecName "kube-api-access-gl9j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.964494 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config" (OuterVolumeSpecName: "config") pod "de68a814-1b9a-4aad-9841-790f24b79e9e" (UID: "de68a814-1b9a-4aad-9841-790f24b79e9e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.964961 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-vrh5d"] Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.968673 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "de68a814-1b9a-4aad-9841-790f24b79e9e" (UID: "de68a814-1b9a-4aad-9841-790f24b79e9e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.972179 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-vrh5d"] Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.972641 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "de68a814-1b9a-4aad-9841-790f24b79e9e" (UID: "de68a814-1b9a-4aad-9841-790f24b79e9e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:26 crc kubenswrapper[5136]: I0320 07:11:26.986404 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "de68a814-1b9a-4aad-9841-790f24b79e9e" (UID: "de68a814-1b9a-4aad-9841-790f24b79e9e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.022830 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.022935 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.022948 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.022963 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl9j4\" (UniqueName: \"kubernetes.io/projected/de68a814-1b9a-4aad-9841-790f24b79e9e-kube-api-access-gl9j4\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.022976 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/de68a814-1b9a-4aad-9841-790f24b79e9e-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.595032 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f697c8bff-xcsxq" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.599035 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerStarted","Data":"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357"} Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.599315 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.631701 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=53.688036107 podStartE2EDuration="59.631686117s" podCreationTimestamp="2026-03-20 07:10:28 +0000 UTC" firstStartedPulling="2026-03-20 07:10:46.989919157 +0000 UTC m=+1279.249230328" lastFinishedPulling="2026-03-20 07:10:52.933569187 +0000 UTC m=+1285.192880338" observedRunningTime="2026-03-20 07:11:27.619348864 +0000 UTC m=+1319.878660025" watchObservedRunningTime="2026-03-20 07:11:27.631686117 +0000 UTC m=+1319.890997268" Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.645052 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"] Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.651411 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f697c8bff-xcsxq"] Mar 20 07:11:27 crc kubenswrapper[5136]: I0320 07:11:27.973781 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Mar 20 07:11:28 crc kubenswrapper[5136]: I0320 07:11:28.408589 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c9bf89-c898-469c-8a83-e1b945b234a6" path="/var/lib/kubelet/pods/c6c9bf89-c898-469c-8a83-e1b945b234a6/volumes" Mar 20 07:11:28 crc kubenswrapper[5136]: I0320 07:11:28.409327 5136 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" path="/var/lib/kubelet/pods/de68a814-1b9a-4aad-9841-790f24b79e9e/volumes" Mar 20 07:11:29 crc kubenswrapper[5136]: I0320 07:11:29.613484 5136 generic.go:334] "Generic (PLEG): container finished" podID="abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" containerID="df74ea59bf43247509097578b0b44714fcb954b2204d1d34decc8550e92f3f6e" exitCode=0 Mar 20 07:11:29 crc kubenswrapper[5136]: I0320 07:11:29.613561 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7xvp" event={"ID":"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60","Type":"ContainerDied","Data":"df74ea59bf43247509097578b0b44714fcb954b2204d1d34decc8550e92f3f6e"} Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495012 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-gpk5l"] Mar 20 07:11:30 crc kubenswrapper[5136]: E0320 07:11:30.495317 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c9bf89-c898-469c-8a83-e1b945b234a6" containerName="mariadb-account-create-update" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495330 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c9bf89-c898-469c-8a83-e1b945b234a6" containerName="mariadb-account-create-update" Mar 20 07:11:30 crc kubenswrapper[5136]: E0320 07:11:30.495352 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="init" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495359 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="init" Mar 20 07:11:30 crc kubenswrapper[5136]: E0320 07:11:30.495368 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="dnsmasq-dns" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495374 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="dnsmasq-dns" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495525 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c9bf89-c898-469c-8a83-e1b945b234a6" containerName="mariadb-account-create-update" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.495539 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="de68a814-1b9a-4aad-9841-790f24b79e9e" containerName="dnsmasq-dns" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.496080 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.501018 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.503884 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gpk5l"] Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.579937 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh9wn\" (UniqueName: \"kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.579983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.681772 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gh9wn\" (UniqueName: \"kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.682082 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.682880 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.717092 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh9wn\" (UniqueName: \"kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn\") pod \"root-account-create-update-gpk5l\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") " pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.845417 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gpk5l" Mar 20 07:11:30 crc kubenswrapper[5136]: I0320 07:11:30.982551 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087520 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtknk\" (UniqueName: \"kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087639 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087676 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087749 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087838 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dispersionconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.087874 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift\") pod \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\" (UID: \"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60\") " Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.088443 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.089351 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.103746 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk" (OuterVolumeSpecName: "kube-api-access-xtknk") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "kube-api-access-xtknk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.108208 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.110267 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.113671 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.114251 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts" (OuterVolumeSpecName: "scripts") pod "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" (UID: "abe527e6-fcfe-4955-a1c4-b2b63f1e3c60"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189704 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtknk\" (UniqueName: \"kubernetes.io/projected/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-kube-api-access-xtknk\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189747 5136 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189757 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189766 5136 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189774 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189782 5136 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.189790 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:31 crc kubenswrapper[5136]: W0320 07:11:31.288424 5136 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01c3454a_7ea9_4c46_9fc5_1cec3a2d445b.slice/crio-c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f WatchSource:0}: Error finding container c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f: Status 404 returned error can't find the container with id c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.289024 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-gpk5l"] Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.630309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gpk5l" event={"ID":"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b","Type":"ContainerStarted","Data":"c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f"} Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.632940 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-v7xvp" event={"ID":"abe527e6-fcfe-4955-a1c4-b2b63f1e3c60","Type":"ContainerDied","Data":"158b8904c559e5367b1f3b8f9dd4746bcb9987780df1531c47db70ae775f7d6f"} Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.632963 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="158b8904c559e5367b1f3b8f9dd4746bcb9987780df1531c47db70ae775f7d6f" Mar 20 07:11:31 crc kubenswrapper[5136]: I0320 07:11:31.632998 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-v7xvp" Mar 20 07:11:32 crc kubenswrapper[5136]: I0320 07:11:32.643309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gpk5l" event={"ID":"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b","Type":"ContainerStarted","Data":"27458c6d0396483e2bf32a7b77f963fd7b5299335805aa8a1978233da54516f3"} Mar 20 07:11:32 crc kubenswrapper[5136]: I0320 07:11:32.814436 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:32 crc kubenswrapper[5136]: I0320 07:11:32.823937 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"swift-storage-0\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " pod="openstack/swift-storage-0" Mar 20 07:11:32 crc kubenswrapper[5136]: I0320 07:11:32.924079 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 20 07:11:33 crc kubenswrapper[5136]: I0320 07:11:33.652228 5136 generic.go:334] "Generic (PLEG): container finished" podID="01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" containerID="27458c6d0396483e2bf32a7b77f963fd7b5299335805aa8a1978233da54516f3" exitCode=0 Mar 20 07:11:33 crc kubenswrapper[5136]: I0320 07:11:33.652271 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gpk5l" event={"ID":"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b","Type":"ContainerDied","Data":"27458c6d0396483e2bf32a7b77f963fd7b5299335805aa8a1978233da54516f3"} Mar 20 07:11:34 crc kubenswrapper[5136]: I0320 07:11:34.815482 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" probeResult="failure" output=< Mar 20 07:11:34 crc kubenswrapper[5136]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 20 07:11:34 crc kubenswrapper[5136]: > Mar 20 07:11:34 crc kubenswrapper[5136]: I0320 07:11:34.827257 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:11:34 crc kubenswrapper[5136]: I0320 07:11:34.828038 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.062321 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-gnwt6-config-zcpwv"] Mar 20 07:11:35 crc kubenswrapper[5136]: E0320 07:11:35.063046 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" containerName="swift-ring-rebalance" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.063062 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" containerName="swift-ring-rebalance" 
Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.063277 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" containerName="swift-ring-rebalance" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.064190 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.066627 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.075982 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnwt6-config-zcpwv"] Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.151380 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.151437 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.151457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc 
kubenswrapper[5136]: I0320 07:11:35.151486 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txpkv\" (UniqueName: \"kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.151566 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.151586 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253450 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " 
pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253472 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253505 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253521 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253546 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txpkv\" (UniqueName: \"kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253837 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " 
pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253839 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.253839 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.254618 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.255471 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.271023 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txpkv\" (UniqueName: \"kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv\") pod \"ovn-controller-gnwt6-config-zcpwv\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " 
pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:35 crc kubenswrapper[5136]: I0320 07:11:35.402143 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:39 crc kubenswrapper[5136]: I0320 07:11:39.742078 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 20 07:11:39 crc kubenswrapper[5136]: I0320 07:11:39.819502 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" probeResult="failure" output=< Mar 20 07:11:39 crc kubenswrapper[5136]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Mar 20 07:11:39 crc kubenswrapper[5136]: > Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.036621 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qrg9s"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.038263 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.050486 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qrg9s"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.137047 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.137186 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc486\" (UniqueName: \"kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.170076 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-fdc6-account-create-update-sfc2q"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.171477 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.178145 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fdc6-account-create-update-sfc2q"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.201104 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.238762 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgvc4\" (UniqueName: \"kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.238852 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc486\" (UniqueName: \"kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.238928 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.238967 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: 
\"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.240009 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.276163 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc486\" (UniqueName: \"kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486\") pod \"cinder-db-create-qrg9s\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.319986 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.340936 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgvc4\" (UniqueName: \"kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.341085 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.341971 5136 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.351414 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5429-account-create-update-kc9f7"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.352590 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.354721 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.372004 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-7vvbn"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.373340 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.391197 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.397343 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgvc4\" (UniqueName: \"kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4\") pod \"cinder-fdc6-account-create-update-sfc2q\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.434870 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5429-account-create-update-kc9f7"] Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.444252 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.444296 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.444373 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9wbc\" (UniqueName: \"kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 
07:11:40.444537 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lvc8\" (UniqueName: \"kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.458095 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7vvbn"]
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.507873 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-4vtvh"]
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.508976 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4vtvh"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.509329 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fdc6-account-create-update-sfc2q"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.545242 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4vtvh"]
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lvc8\" (UniqueName: \"kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550398 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4svk\" (UniqueName: \"kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " pod="openstack/neutron-db-create-4vtvh"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550436 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550455 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550472 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " pod="openstack/neutron-db-create-4vtvh"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.550501 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9wbc\" (UniqueName: \"kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.551669 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.551873 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.559889 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-vs5ks"]
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.572038 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vs5ks"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.576597 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vs5ks"]
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.578527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lvc8\" (UniqueName: \"kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8\") pod \"barbican-db-create-7vvbn\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " pod="openstack/barbican-db-create-7vvbn"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.589575 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.593220 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6hlpf"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.593434 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.599112 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.602493 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9wbc\" (UniqueName: \"kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc\") pod \"barbican-5429-account-create-update-kc9f7\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") " pod="openstack/barbican-5429-account-create-update-kc9f7"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.652642 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4svk\" (UniqueName: \"kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " pod="openstack/neutron-db-create-4vtvh"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.652727 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " pod="openstack/neutron-db-create-4vtvh"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.653801 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " pod="openstack/neutron-db-create-4vtvh"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.664634 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-bc06-account-create-update-lm56h"]
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.666091 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bc06-account-create-update-lm56h"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.668011 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-kc9f7"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.675185 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.676630 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4svk\" (UniqueName: \"kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk\") pod \"neutron-db-create-4vtvh\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " pod="openstack/neutron-db-create-4vtvh"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.687952 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7vvbn"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.705490 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bc06-account-create-update-lm56h"]
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.753529 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.753589 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh2l2\" (UniqueName: \"kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.753726 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.834125 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4vtvh"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.855259 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.855317 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.855351 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh2l2\" (UniqueName: \"kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.855728 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdwtq\" (UniqueName: \"kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.855797 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.858983 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.859115 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.877459 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh2l2\" (UniqueName: \"kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2\") pod \"keystone-db-sync-vs5ks\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " pod="openstack/keystone-db-sync-vs5ks"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.957778 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdwtq\" (UniqueName: \"kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.957869 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.958492 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h"
Mar 20 07:11:40 crc kubenswrapper[5136]: I0320 07:11:40.972579 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdwtq\" (UniqueName: \"kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq\") pod \"neutron-bc06-account-create-update-lm56h\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " pod="openstack/neutron-bc06-account-create-update-lm56h"
Mar 20 07:11:41 crc kubenswrapper[5136]: I0320 07:11:41.020927 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vs5ks"
Mar 20 07:11:41 crc kubenswrapper[5136]: I0320 07:11:41.035769 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bc06-account-create-update-lm56h"
Mar 20 07:11:42 crc kubenswrapper[5136]: E0320 07:11:42.030596 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120"
Mar 20 07:11:42 crc kubenswrapper[5136]: E0320 07:11:42.031275 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n4cjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-ldzkm_openstack(a8c6efdb-3b8c-4123-bfb6-a67cd416fb18): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 20 07:11:42 crc kubenswrapper[5136]: E0320 07:11:42.033414 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-ldzkm" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18"
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.114875 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gpk5l"
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.280450 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh9wn\" (UniqueName: \"kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn\") pod \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") "
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.280765 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts\") pod \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\" (UID: \"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b\") "
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.283120 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" (UID: "01c3454a-7ea9-4c46-9fc5-1cec3a2d445b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.293901 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn" (OuterVolumeSpecName: "kube-api-access-gh9wn") pod "01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" (UID: "01c3454a-7ea9-4c46-9fc5-1cec3a2d445b"). InnerVolumeSpecName "kube-api-access-gh9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.383301 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh9wn\" (UniqueName: \"kubernetes.io/projected/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-kube-api-access-gh9wn\") on node \"crc\" DevicePath \"\""
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.383333 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.542226 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-bc06-account-create-update-lm56h"]
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.735736 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc06-account-create-update-lm56h" event={"ID":"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda","Type":"ContainerStarted","Data":"0fb532a79a9aa102a7a434267fea66c47410f2caf36ee8ef0fa620b6493b6b37"}
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.738092 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-gpk5l"
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.738541 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-gpk5l" event={"ID":"01c3454a-7ea9-4c46-9fc5-1cec3a2d445b","Type":"ContainerDied","Data":"c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f"}
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.738592 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4b51ad16741164e53f30bf9875833941ddae2a2b3716b28843403d7d414550f"
Mar 20 07:11:42 crc kubenswrapper[5136]: E0320 07:11:42.739396 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-glance-api@sha256:dae5e39780d5a15eed030c7009f8e5317139d447558ac83f038497be594be120\\\"\"" pod="openstack/glance-db-sync-ldzkm" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18"
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.825663 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5429-account-create-update-kc9f7"]
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.836905 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-4vtvh"]
Mar 20 07:11:42 crc kubenswrapper[5136]: I0320 07:11:42.958775 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Mar 20 07:11:42 crc kubenswrapper[5136]: W0320 07:11:42.963474 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd944fb6_1517_4f5b_b579_79d8f1f3da19.slice/crio-d8cd982e91f64705da20c6a48fa3020dac8ffb0c31aec91bfc9c77ff27912742 WatchSource:0}: Error finding container d8cd982e91f64705da20c6a48fa3020dac8ffb0c31aec91bfc9c77ff27912742: Status 404 returned error can't find the container with id d8cd982e91f64705da20c6a48fa3020dac8ffb0c31aec91bfc9c77ff27912742
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.005477 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-fdc6-account-create-update-sfc2q"]
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.019612 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qrg9s"]
Mar 20 07:11:43 crc kubenswrapper[5136]: W0320 07:11:43.027069 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f4b546d_a206_4e15_b21b_850ef44aac79.slice/crio-377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d WatchSource:0}: Error finding container 377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d: Status 404 returned error can't find the container with id 377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.038755 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-vs5ks"]
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.048784 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-7vvbn"]
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.064406 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-gnwt6-config-zcpwv"]
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.748276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vs5ks" event={"ID":"c7e7cfea-b971-447e-a166-20b4827ce7dc","Type":"ContainerStarted","Data":"30980e34c4254b1c4f948141d08320685947985bfdf2f9e08996624f149427ee"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.750742 5136 generic.go:334] "Generic (PLEG): container finished" podID="81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" containerID="5e947f339491ac05ba12abc9cb95630dcf48840148917141c549dbda5ca4a25f" exitCode=0
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.750806 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7vvbn" event={"ID":"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5","Type":"ContainerDied","Data":"5e947f339491ac05ba12abc9cb95630dcf48840148917141c549dbda5ca4a25f"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.750887 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7vvbn" event={"ID":"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5","Type":"ContainerStarted","Data":"84bc3a1e112c37cd1f67e75a52abb8a000d51e43387e883009d8819dae89b9de"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.756433 5136 generic.go:334] "Generic (PLEG): container finished" podID="d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" containerID="0ed02eb432d6f42e0d9bf84365b12025d2b0ecfccb688b075f04ab7b6e93a89d" exitCode=0
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.756524 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc06-account-create-update-lm56h" event={"ID":"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda","Type":"ContainerDied","Data":"0ed02eb432d6f42e0d9bf84365b12025d2b0ecfccb688b075f04ab7b6e93a89d"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.758235 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccfe42cb-9794-449c-8ad8-54d68bf21607" containerID="32a4b8b42d71b772e9ef90a830d8bb2691b008e79e6ac5eedc1a261ab6fb23b2" exitCode=0
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.758299 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fdc6-account-create-update-sfc2q" event={"ID":"ccfe42cb-9794-449c-8ad8-54d68bf21607","Type":"ContainerDied","Data":"32a4b8b42d71b772e9ef90a830d8bb2691b008e79e6ac5eedc1a261ab6fb23b2"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.758323 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fdc6-account-create-update-sfc2q" event={"ID":"ccfe42cb-9794-449c-8ad8-54d68bf21607","Type":"ContainerStarted","Data":"e8a7f3cb69a5975bc30fc2f03d890a9002e156e4337032cea99e0a3317da2e4f"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.784181 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"d8cd982e91f64705da20c6a48fa3020dac8ffb0c31aec91bfc9c77ff27912742"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.789181 5136 generic.go:334] "Generic (PLEG): container finished" podID="52bcca3a-bd10-425e-bc7f-f78c8c4a0271" containerID="52c9595f9d03cfa1e4df7232d34e2bf01954bbb2d3d7f55b6c4baddaa2f4853a" exitCode=0
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.789569 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4vtvh" event={"ID":"52bcca3a-bd10-425e-bc7f-f78c8c4a0271","Type":"ContainerDied","Data":"52c9595f9d03cfa1e4df7232d34e2bf01954bbb2d3d7f55b6c4baddaa2f4853a"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.789669 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4vtvh" event={"ID":"52bcca3a-bd10-425e-bc7f-f78c8c4a0271","Type":"ContainerStarted","Data":"d9fac24447701126e9a237d1bf5d69bcfb5c81fad7661f19664ae4562c54f208"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.796021 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6-config-zcpwv" event={"ID":"8e30801a-f333-4f24-b301-4e03b644b07b","Type":"ContainerStarted","Data":"152cfd50a682e083fe5fcb83f9e826724106ecbcb51dbd391cff3907a957fa98"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.796075 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6-config-zcpwv" event={"ID":"8e30801a-f333-4f24-b301-4e03b644b07b","Type":"ContainerStarted","Data":"7c314f2f80dc7cc4e88b76ae835a24e87396cc9afa36bc502acc09f41465ff1c"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.799148 5136 generic.go:334] "Generic (PLEG): container finished" podID="ec1091b0-0c0e-40a9-9131-93d8e912d0af" containerID="47ae9136918142f0659195583b1d45f1b8d098ff54fd4db577e632c9d504d4ec" exitCode=0
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.799218 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5429-account-create-update-kc9f7" event={"ID":"ec1091b0-0c0e-40a9-9131-93d8e912d0af","Type":"ContainerDied","Data":"47ae9136918142f0659195583b1d45f1b8d098ff54fd4db577e632c9d504d4ec"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.799250 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5429-account-create-update-kc9f7" event={"ID":"ec1091b0-0c0e-40a9-9131-93d8e912d0af","Type":"ContainerStarted","Data":"1d74ef03c10f467ffa66ef9e94663926540a5700594d348d3814bd28d77786fa"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.801767 5136 generic.go:334] "Generic (PLEG): container finished" podID="4f4b546d-a206-4e15-b21b-850ef44aac79" containerID="685537caeff80758998e736f40d87da6358ae395ce8425cb44887ce77751a0c9" exitCode=0
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.801858 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qrg9s" event={"ID":"4f4b546d-a206-4e15-b21b-850ef44aac79","Type":"ContainerDied","Data":"685537caeff80758998e736f40d87da6358ae395ce8425cb44887ce77751a0c9"}
Mar 20 07:11:43 crc kubenswrapper[5136]: I0320 07:11:43.801894 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qrg9s" event={"ID":"4f4b546d-a206-4e15-b21b-850ef44aac79","Type":"ContainerStarted","Data":"377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d"}
Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.807665 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-gnwt6"
Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.815648 5136 generic.go:334] "Generic (PLEG): container finished" podID="8e30801a-f333-4f24-b301-4e03b644b07b" containerID="152cfd50a682e083fe5fcb83f9e826724106ecbcb51dbd391cff3907a957fa98" exitCode=0
Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.815709 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6-config-zcpwv" event={"ID":"8e30801a-f333-4f24-b301-4e03b644b07b","Type":"ContainerDied","Data":"152cfd50a682e083fe5fcb83f9e826724106ecbcb51dbd391cff3907a957fa98"}
Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.818605 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539"}
Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.818636 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa"}
Mar 20 07:11:44 crc kubenswrapper[5136]: I0320 07:11:44.818649 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d"}
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.165003 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-kc9f7"
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.351379 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts\") pod \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") "
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.351466 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9wbc\" (UniqueName: \"kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc\") pod \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\" (UID: \"ec1091b0-0c0e-40a9-9131-93d8e912d0af\") "
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.360008 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc" (OuterVolumeSpecName: "kube-api-access-w9wbc") pod "ec1091b0-0c0e-40a9-9131-93d8e912d0af" (UID: "ec1091b0-0c0e-40a9-9131-93d8e912d0af"). InnerVolumeSpecName "kube-api-access-w9wbc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.365193 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec1091b0-0c0e-40a9-9131-93d8e912d0af" (UID: "ec1091b0-0c0e-40a9-9131-93d8e912d0af"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.453710 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec1091b0-0c0e-40a9-9131-93d8e912d0af-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.453748 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9wbc\" (UniqueName: \"kubernetes.io/projected/ec1091b0-0c0e-40a9-9131-93d8e912d0af-kube-api-access-w9wbc\") on node \"crc\" DevicePath \"\""
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.492997 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-bc06-account-create-update-lm56h"
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.493345 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6-config-zcpwv"
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.499583 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7vvbn"
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.511682 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qrg9s"
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.516769 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-4vtvh"
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.657391 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") "
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.657453 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") "
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.657462 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.657479 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdwtq\" (UniqueName: \"kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq\") pod \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") "
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.657581 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts\") pod \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") "
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658116 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658133 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52bcca3a-bd10-425e-bc7f-f78c8c4a0271" (UID: "52bcca3a-bd10-425e-bc7f-f78c8c4a0271"). InnerVolumeSpecName "operator-scripts".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658173 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658239 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txpkv\" (UniqueName: \"kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658271 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lvc8\" (UniqueName: \"kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8\") pod \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658322 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658374 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts\") pod \"4f4b546d-a206-4e15-b21b-850ef44aac79\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658407 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" 
(UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn\") pod \"8e30801a-f333-4f24-b301-4e03b644b07b\" (UID: \"8e30801a-f333-4f24-b301-4e03b644b07b\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658427 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wc486\" (UniqueName: \"kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486\") pod \"4f4b546d-a206-4e15-b21b-850ef44aac79\" (UID: \"4f4b546d-a206-4e15-b21b-850ef44aac79\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658460 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts\") pod \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\" (UID: \"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658488 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts\") pod \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\" (UID: \"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658527 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4svk\" (UniqueName: \"kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk\") pod \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\" (UID: \"52bcca3a-bd10-425e-bc7f-f78c8c4a0271\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.658685 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). 
InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659017 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run" (OuterVolumeSpecName: "var-run") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659175 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" (UID: "d2b269d7-6c83-46fd-b85c-5d9dba5ccbda"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659463 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4f4b546d-a206-4e15-b21b-850ef44aac79" (UID: "4f4b546d-a206-4e15-b21b-850ef44aac79"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659484 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659516 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" (UID: "81ba128a-ff3d-42a9-aa76-04e60b3a2cb5"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.659914 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts" (OuterVolumeSpecName: "scripts") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660416 5136 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660438 5136 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-additional-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660449 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660458 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8e30801a-f333-4f24-b301-4e03b644b07b-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660466 5136 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660473 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4f4b546d-a206-4e15-b21b-850ef44aac79-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660481 5136 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/8e30801a-f333-4f24-b301-4e03b644b07b-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660489 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.660497 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.662021 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq" (OuterVolumeSpecName: "kube-api-access-pdwtq") pod "d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" (UID: "d2b269d7-6c83-46fd-b85c-5d9dba5ccbda"). InnerVolumeSpecName "kube-api-access-pdwtq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.662387 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486" (OuterVolumeSpecName: "kube-api-access-wc486") pod "4f4b546d-a206-4e15-b21b-850ef44aac79" (UID: "4f4b546d-a206-4e15-b21b-850ef44aac79"). InnerVolumeSpecName "kube-api-access-wc486". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.662914 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8" (OuterVolumeSpecName: "kube-api-access-2lvc8") pod "81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" (UID: "81ba128a-ff3d-42a9-aa76-04e60b3a2cb5"). InnerVolumeSpecName "kube-api-access-2lvc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.663701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv" (OuterVolumeSpecName: "kube-api-access-txpkv") pod "8e30801a-f333-4f24-b301-4e03b644b07b" (UID: "8e30801a-f333-4f24-b301-4e03b644b07b"). InnerVolumeSpecName "kube-api-access-txpkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.663967 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk" (OuterVolumeSpecName: "kube-api-access-v4svk") pod "52bcca3a-bd10-425e-bc7f-f78c8c4a0271" (UID: "52bcca3a-bd10-425e-bc7f-f78c8c4a0271"). InnerVolumeSpecName "kube-api-access-v4svk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.761543 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts\") pod \"ccfe42cb-9794-449c-8ad8-54d68bf21607\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.761916 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgvc4\" (UniqueName: \"kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4\") pod \"ccfe42cb-9794-449c-8ad8-54d68bf21607\" (UID: \"ccfe42cb-9794-449c-8ad8-54d68bf21607\") " Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.761988 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ccfe42cb-9794-449c-8ad8-54d68bf21607" (UID: "ccfe42cb-9794-449c-8ad8-54d68bf21607"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.762931 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ccfe42cb-9794-449c-8ad8-54d68bf21607-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.763038 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txpkv\" (UniqueName: \"kubernetes.io/projected/8e30801a-f333-4f24-b301-4e03b644b07b-kube-api-access-txpkv\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.763115 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lvc8\" (UniqueName: \"kubernetes.io/projected/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5-kube-api-access-2lvc8\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.763201 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wc486\" (UniqueName: \"kubernetes.io/projected/4f4b546d-a206-4e15-b21b-850ef44aac79-kube-api-access-wc486\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.763306 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4svk\" (UniqueName: \"kubernetes.io/projected/52bcca3a-bd10-425e-bc7f-f78c8c4a0271-kube-api-access-v4svk\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.763421 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdwtq\" (UniqueName: \"kubernetes.io/projected/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda-kube-api-access-pdwtq\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.764472 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4" (OuterVolumeSpecName: 
"kube-api-access-cgvc4") pod "ccfe42cb-9794-449c-8ad8-54d68bf21607" (UID: "ccfe42cb-9794-449c-8ad8-54d68bf21607"). InnerVolumeSpecName "kube-api-access-cgvc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.825251 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.825300 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.835166 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-bc06-account-create-update-lm56h" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.835168 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-bc06-account-create-update-lm56h" event={"ID":"d2b269d7-6c83-46fd-b85c-5d9dba5ccbda","Type":"ContainerDied","Data":"0fb532a79a9aa102a7a434267fea66c47410f2caf36ee8ef0fa620b6493b6b37"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.835303 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fb532a79a9aa102a7a434267fea66c47410f2caf36ee8ef0fa620b6493b6b37" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.837404 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-fdc6-account-create-update-sfc2q" event={"ID":"ccfe42cb-9794-449c-8ad8-54d68bf21607","Type":"ContainerDied","Data":"e8a7f3cb69a5975bc30fc2f03d890a9002e156e4337032cea99e0a3317da2e4f"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.837423 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8a7f3cb69a5975bc30fc2f03d890a9002e156e4337032cea99e0a3317da2e4f" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.837463 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-fdc6-account-create-update-sfc2q" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.838661 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-7vvbn" event={"ID":"81ba128a-ff3d-42a9-aa76-04e60b3a2cb5","Type":"ContainerDied","Data":"84bc3a1e112c37cd1f67e75a52abb8a000d51e43387e883009d8819dae89b9de"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.838678 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84bc3a1e112c37cd1f67e75a52abb8a000d51e43387e883009d8819dae89b9de" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.838748 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-7vvbn" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.840489 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-4vtvh" event={"ID":"52bcca3a-bd10-425e-bc7f-f78c8c4a0271","Type":"ContainerDied","Data":"d9fac24447701126e9a237d1bf5d69bcfb5c81fad7661f19664ae4562c54f208"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.840505 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d9fac24447701126e9a237d1bf5d69bcfb5c81fad7661f19664ae4562c54f208" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.840520 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-4vtvh" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.841936 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6-config-zcpwv" event={"ID":"8e30801a-f333-4f24-b301-4e03b644b07b","Type":"ContainerDied","Data":"7c314f2f80dc7cc4e88b76ae835a24e87396cc9afa36bc502acc09f41465ff1c"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.841955 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c314f2f80dc7cc4e88b76ae835a24e87396cc9afa36bc502acc09f41465ff1c" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.842043 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6-config-zcpwv" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.850419 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5429-account-create-update-kc9f7" event={"ID":"ec1091b0-0c0e-40a9-9131-93d8e912d0af","Type":"ContainerDied","Data":"1d74ef03c10f467ffa66ef9e94663926540a5700594d348d3814bd28d77786fa"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.850456 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-5429-account-create-update-kc9f7" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.850467 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d74ef03c10f467ffa66ef9e94663926540a5700594d348d3814bd28d77786fa" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.851488 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qrg9s" event={"ID":"4f4b546d-a206-4e15-b21b-850ef44aac79","Type":"ContainerDied","Data":"377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.851514 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="377ab0d8abc56dfea3857798a60589d54b8af4b0a3e0ecfc809ded51a854707d" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.851592 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qrg9s" Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.854528 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346"} Mar 20 07:11:45 crc kubenswrapper[5136]: I0320 07:11:45.864717 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgvc4\" (UniqueName: \"kubernetes.io/projected/ccfe42cb-9794-449c-8ad8-54d68bf21607-kube-api-access-cgvc4\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:46 crc kubenswrapper[5136]: I0320 07:11:46.625067 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gnwt6-config-zcpwv"] Mar 20 07:11:46 crc kubenswrapper[5136]: I0320 07:11:46.632923 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gnwt6-config-zcpwv"] Mar 20 07:11:46 crc kubenswrapper[5136]: I0320 07:11:46.969329 5136 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-gpk5l"] Mar 20 07:11:46 crc kubenswrapper[5136]: I0320 07:11:46.979206 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-gpk5l"] Mar 20 07:11:48 crc kubenswrapper[5136]: I0320 07:11:48.408386 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" path="/var/lib/kubelet/pods/01c3454a-7ea9-4c46-9fc5-1cec3a2d445b/volumes" Mar 20 07:11:48 crc kubenswrapper[5136]: I0320 07:11:48.409332 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e30801a-f333-4f24-b301-4e03b644b07b" path="/var/lib/kubelet/pods/8e30801a-f333-4f24-b301-4e03b644b07b/volumes" Mar 20 07:11:50 crc kubenswrapper[5136]: I0320 07:11:50.916628 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67"} Mar 20 07:11:50 crc kubenswrapper[5136]: I0320 07:11:50.917007 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6"} Mar 20 07:11:50 crc kubenswrapper[5136]: I0320 07:11:50.917024 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3"} Mar 20 07:11:50 crc kubenswrapper[5136]: I0320 07:11:50.917037 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3"} Mar 20 07:11:50 crc 
kubenswrapper[5136]: I0320 07:11:50.919709 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vs5ks" event={"ID":"c7e7cfea-b971-447e-a166-20b4827ce7dc","Type":"ContainerStarted","Data":"ca34610c300fb63b0b8b7fa75b8b5e36ec0f7e9d15dbda229381348a1e3e55be"} Mar 20 07:11:50 crc kubenswrapper[5136]: I0320 07:11:50.944539 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-vs5ks" podStartSLOduration=4.032194541 podStartE2EDuration="10.944518433s" podCreationTimestamp="2026-03-20 07:11:40 +0000 UTC" firstStartedPulling="2026-03-20 07:11:43.044420769 +0000 UTC m=+1335.303731920" lastFinishedPulling="2026-03-20 07:11:49.956744661 +0000 UTC m=+1342.216055812" observedRunningTime="2026-03-20 07:11:50.935225651 +0000 UTC m=+1343.194536802" watchObservedRunningTime="2026-03-20 07:11:50.944518433 +0000 UTC m=+1343.203829604" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.032854 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-b5fwk"] Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033512 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec1091b0-0c0e-40a9-9131-93d8e912d0af" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033528 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec1091b0-0c0e-40a9-9131-93d8e912d0af" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033547 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52bcca3a-bd10-425e-bc7f-f78c8c4a0271" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033555 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="52bcca3a-bd10-425e-bc7f-f78c8c4a0271" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033572 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e30801a-f333-4f24-b301-4e03b644b07b" containerName="ovn-config" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033579 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e30801a-f333-4f24-b301-4e03b644b07b" containerName="ovn-config" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033591 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033598 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033610 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f4b546d-a206-4e15-b21b-850ef44aac79" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033617 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f4b546d-a206-4e15-b21b-850ef44aac79" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033632 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033640 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033663 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfe42cb-9794-449c-8ad8-54d68bf21607" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033670 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfe42cb-9794-449c-8ad8-54d68bf21607" containerName="mariadb-account-create-update" Mar 
20 07:11:52 crc kubenswrapper[5136]: E0320 07:11:52.033683 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033690 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033879 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033895 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e30801a-f333-4f24-b301-4e03b644b07b" containerName="ovn-config" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033910 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec1091b0-0c0e-40a9-9131-93d8e912d0af" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033920 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="52bcca3a-bd10-425e-bc7f-f78c8c4a0271" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033931 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfe42cb-9794-449c-8ad8-54d68bf21607" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033942 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033957 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f4b546d-a206-4e15-b21b-850ef44aac79" containerName="mariadb-database-create" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.033967 5136 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="01c3454a-7ea9-4c46-9fc5-1cec3a2d445b" containerName="mariadb-account-create-update" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.034563 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.040138 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.048595 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b5fwk"] Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.171584 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9bjh\" (UniqueName: \"kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.171653 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.272950 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9bjh\" (UniqueName: \"kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.273016 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.273938 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.292550 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9bjh\" (UniqueName: \"kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh\") pod \"root-account-create-update-b5fwk\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:52 crc kubenswrapper[5136]: I0320 07:11:52.368333 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.088162 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-b5fwk"] Mar 20 07:11:53 crc kubenswrapper[5136]: W0320 07:11:53.097664 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc44ee109_b721_41c2_bc45_8c6097d31402.slice/crio-893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d WatchSource:0}: Error finding container 893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d: Status 404 returned error can't find the container with id 893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.949887 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7e7cfea-b971-447e-a166-20b4827ce7dc" containerID="ca34610c300fb63b0b8b7fa75b8b5e36ec0f7e9d15dbda229381348a1e3e55be" exitCode=0 Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.950100 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vs5ks" event={"ID":"c7e7cfea-b971-447e-a166-20b4827ce7dc","Type":"ContainerDied","Data":"ca34610c300fb63b0b8b7fa75b8b5e36ec0f7e9d15dbda229381348a1e3e55be"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.951915 5136 generic.go:334] "Generic (PLEG): container finished" podID="c44ee109-b721-41c2-bc45-8c6097d31402" containerID="80afd4ebec7d57a2a5f4e5804fe0cafa6290530e8266af5fe943abb82f8b0a3e" exitCode=0 Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.951977 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5fwk" event={"ID":"c44ee109-b721-41c2-bc45-8c6097d31402","Type":"ContainerDied","Data":"80afd4ebec7d57a2a5f4e5804fe0cafa6290530e8266af5fe943abb82f8b0a3e"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.952000 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5fwk" event={"ID":"c44ee109-b721-41c2-bc45-8c6097d31402","Type":"ContainerStarted","Data":"893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963505 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963539 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963563 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963577 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786"} Mar 20 07:11:53 crc kubenswrapper[5136]: I0320 07:11:53.963592 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43"} Mar 20 07:11:54 crc kubenswrapper[5136]: I0320 07:11:54.992135 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerStarted","Data":"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a"} Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.309522 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=30.522408749 podStartE2EDuration="40.309506201s" podCreationTimestamp="2026-03-20 07:11:15 +0000 UTC" firstStartedPulling="2026-03-20 07:11:42.969291889 +0000 UTC m=+1335.228603040" lastFinishedPulling="2026-03-20 07:11:52.756389341 +0000 UTC m=+1345.015700492" observedRunningTime="2026-03-20 07:11:55.046883252 +0000 UTC m=+1347.306194423" watchObservedRunningTime="2026-03-20 07:11:55.309506201 +0000 UTC m=+1347.568817352" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.318317 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.319627 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.324188 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.350688 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.367248 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.383586 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433505 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ch26\" (UniqueName: \"kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433549 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433662 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433679 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433697 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.433801 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.535526 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle\") pod \"c7e7cfea-b971-447e-a166-20b4827ce7dc\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.535831 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts\") pod \"c44ee109-b721-41c2-bc45-8c6097d31402\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.535910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9bjh\" (UniqueName: \"kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh\") pod \"c44ee109-b721-41c2-bc45-8c6097d31402\" (UID: \"c44ee109-b721-41c2-bc45-8c6097d31402\") " Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.535940 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh2l2\" (UniqueName: 
\"kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2\") pod \"c7e7cfea-b971-447e-a166-20b4827ce7dc\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.535959 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data\") pod \"c7e7cfea-b971-447e-a166-20b4827ce7dc\" (UID: \"c7e7cfea-b971-447e-a166-20b4827ce7dc\") " Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536126 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ch26\" (UniqueName: \"kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536157 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536273 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536288 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: 
\"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536306 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.536335 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.537264 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.538689 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c44ee109-b721-41c2-bc45-8c6097d31402" (UID: "c44ee109-b721-41c2-bc45-8c6097d31402"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.544503 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh" (OuterVolumeSpecName: "kube-api-access-b9bjh") pod "c44ee109-b721-41c2-bc45-8c6097d31402" (UID: "c44ee109-b721-41c2-bc45-8c6097d31402"). InnerVolumeSpecName "kube-api-access-b9bjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.544973 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.545515 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.545806 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.546346 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" 
Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.563022 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2" (OuterVolumeSpecName: "kube-api-access-mh2l2") pod "c7e7cfea-b971-447e-a166-20b4827ce7dc" (UID: "c7e7cfea-b971-447e-a166-20b4827ce7dc"). InnerVolumeSpecName "kube-api-access-mh2l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.582960 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7e7cfea-b971-447e-a166-20b4827ce7dc" (UID: "c7e7cfea-b971-447e-a166-20b4827ce7dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.584996 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ch26\" (UniqueName: \"kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26\") pod \"dnsmasq-dns-cb65b4b5-wv44d\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.625002 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data" (OuterVolumeSpecName: "config-data") pod "c7e7cfea-b971-447e-a166-20b4827ce7dc" (UID: "c7e7cfea-b971-447e-a166-20b4827ce7dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.637844 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b9bjh\" (UniqueName: \"kubernetes.io/projected/c44ee109-b721-41c2-bc45-8c6097d31402-kube-api-access-b9bjh\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.638015 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh2l2\" (UniqueName: \"kubernetes.io/projected/c7e7cfea-b971-447e-a166-20b4827ce7dc-kube-api-access-mh2l2\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.638070 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.638117 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7e7cfea-b971-447e-a166-20b4827ce7dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.638195 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c44ee109-b721-41c2-bc45-8c6097d31402-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:55 crc kubenswrapper[5136]: I0320 07:11:55.684210 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.011605 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-vs5ks" event={"ID":"c7e7cfea-b971-447e-a166-20b4827ce7dc","Type":"ContainerDied","Data":"30980e34c4254b1c4f948141d08320685947985bfdf2f9e08996624f149427ee"} Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.011908 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30980e34c4254b1c4f948141d08320685947985bfdf2f9e08996624f149427ee" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.011618 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-vs5ks" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.013451 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-b5fwk" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.014924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-b5fwk" event={"ID":"c44ee109-b721-41c2-bc45-8c6097d31402","Type":"ContainerDied","Data":"893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d"} Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.014955 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="893ce5f8e661407b1b1e15c6feb88baeefabeadae73be42bde85ab395691ed3d" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.144124 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.158484 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.202782 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fvfgp"] Mar 20 
07:11:56 crc kubenswrapper[5136]: E0320 07:11:56.203572 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c44ee109-b721-41c2-bc45-8c6097d31402" containerName="mariadb-account-create-update" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.203637 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c44ee109-b721-41c2-bc45-8c6097d31402" containerName="mariadb-account-create-update" Mar 20 07:11:56 crc kubenswrapper[5136]: E0320 07:11:56.203707 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7e7cfea-b971-447e-a166-20b4827ce7dc" containerName="keystone-db-sync" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.203771 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7e7cfea-b971-447e-a166-20b4827ce7dc" containerName="keystone-db-sync" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.203979 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c44ee109-b721-41c2-bc45-8c6097d31402" containerName="mariadb-account-create-update" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.204047 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7e7cfea-b971-447e-a166-20b4827ce7dc" containerName="keystone-db-sync" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.204594 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.208214 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.208547 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.208782 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.209015 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.209143 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6hlpf" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.216065 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.217335 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.223643 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fvfgp"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.240541 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355797 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355867 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355891 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355908 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " 
pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355975 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6k46\" (UniqueName: \"kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.355989 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jgz5\" (UniqueName: \"kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356008 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356029 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356062 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: 
\"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356101 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.356116 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.458499 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-llt2h"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.459552 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461248 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-llt2h"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461397 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461430 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461451 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461470 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461495 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461514 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461531 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461592 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6k46\" (UniqueName: \"kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461606 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jgz5\" (UniqueName: \"kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461624 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461645 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.461681 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.463175 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.463903 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.464627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: 
\"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.464913 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.471850 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ps866" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.472320 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.472652 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.473524 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.480756 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.489531 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle\") pod 
\"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.489784 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.489985 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.516606 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.523209 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-kxk7p"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.538078 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.538715 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jgz5\" (UniqueName: \"kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5\") pod \"keystone-bootstrap-fvfgp\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.542793 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6k46\" (UniqueName: \"kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46\") pod \"dnsmasq-dns-5ff8446d97-wzbhw\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.544753 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.545230 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-scqxf" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.547014 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.573320 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.573702 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.573764 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4skhx\" (UniqueName: \"kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.575777 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.576796 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.577124 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.577251 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: 
I0320 07:11:56.587173 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-n6cqg"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.588846 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.597285 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.597865 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4t29g" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.598087 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.617151 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.675934 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kxk7p"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688242 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688289 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688326 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp59n\" (UniqueName: \"kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688392 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688437 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688458 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt5bh\" (UniqueName: \"kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688489 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688524 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688543 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688561 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688589 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688633 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4skhx\" (UniqueName: \"kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688654 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688674 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.688739 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.692355 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.698194 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.698942 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " 
pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.699156 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.707088 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jv7f9"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.708424 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.712511 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.712770 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tz5pc" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.720511 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.724588 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4skhx\" (UniqueName: \"kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx\") pod \"cinder-db-sync-llt2h\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.727608 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jv7f9"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.739004 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n6cqg"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 
07:11:56.766095 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.768188 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.768323 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.778172 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.780100 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.781692 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.782033 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790246 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790297 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9rd\" (UniqueName: \"kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790352 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790370 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790391 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fp59n\" (UniqueName: \"kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790448 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kt5bh\" (UniqueName: \"kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790492 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790520 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790540 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.790567 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.794033 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.799597 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.801168 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.803073 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.807600 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.807685 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.808086 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data\") pod \"placement-db-sync-n6cqg\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.836608 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kt5bh\" (UniqueName: \"kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh\") pod \"placement-db-sync-n6cqg\" (UID: 
\"61300b5b-7c36-4857-a0bf-631bf3cbb001\") " pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.837029 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fp59n\" (UniqueName: \"kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n\") pod \"neutron-db-sync-kxk7p\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") " pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893374 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893635 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893657 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893671 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" 
Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893686 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893703 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893722 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893749 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893782 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6s9rd\" (UniqueName: \"kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893798 
5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7dlp\" (UniqueName: \"kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.893855 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mmcd\" (UniqueName: \"kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.894203 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.894253 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.894295 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.894311 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.894357 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.905440 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-llt2h" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.911741 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.920201 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.925715 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6s9rd\" (UniqueName: \"kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd\") pod \"barbican-db-sync-jv7f9\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " pod="openstack/barbican-db-sync-jv7f9" Mar 20 
07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.935676 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kxk7p" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.950207 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n6cqg" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996519 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996564 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996585 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996601 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996648 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996674 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996691 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996707 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996721 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996739 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:56 crc 
kubenswrapper[5136]: I0320 07:11:56.996769 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996801 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7dlp\" (UniqueName: \"kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:56 crc kubenswrapper[5136]: I0320 07:11:56.996872 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mmcd\" (UniqueName: \"kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.001017 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.002297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.002905 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.004295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.004834 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.006210 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.006951 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.007334 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: 
\"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.008009 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.011392 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.021256 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.027965 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7dlp\" (UniqueName: \"kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp\") pod \"dnsmasq-dns-7ff6d84665-6bvps\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.028391 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mmcd\" (UniqueName: \"kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd\") pod \"ceilometer-0\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.051569 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="d3270013-a1f2-43bd-8f40-38b10b4253a1" containerID="bf2834c66b5c522715063b4ba5f30618173a273c2bbf5480ab6f45f292898e0c" exitCode=0 Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.051669 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" event={"ID":"d3270013-a1f2-43bd-8f40-38b10b4253a1","Type":"ContainerDied","Data":"bf2834c66b5c522715063b4ba5f30618173a273c2bbf5480ab6f45f292898e0c"} Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.051696 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" event={"ID":"d3270013-a1f2-43bd-8f40-38b10b4253a1","Type":"ContainerStarted","Data":"446c6c3ae62c18c1fedd61d3a75b7402468673bf89bbfafebbd571740ddaa669"} Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.056347 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.063896 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ldzkm" event={"ID":"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18","Type":"ContainerStarted","Data":"dd50ea3e8d708d6e3b7b256a2ea07c9211cc4921a494661346061b12daf9a3f3"} Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.106475 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ldzkm" podStartSLOduration=2.216441305 podStartE2EDuration="33.106455596s" podCreationTimestamp="2026-03-20 07:11:24 +0000 UTC" firstStartedPulling="2026-03-20 07:11:24.981163222 +0000 UTC m=+1317.240474373" lastFinishedPulling="2026-03-20 07:11:55.871177513 +0000 UTC m=+1348.130488664" observedRunningTime="2026-03-20 07:11:57.097664979 +0000 UTC m=+1349.356976130" watchObservedRunningTime="2026-03-20 07:11:57.106455596 +0000 UTC m=+1349.365766747" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.154304 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.165309 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.319203 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fvfgp"] Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.501517 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.706893 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-kxk7p"] Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.724628 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.735867 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-n6cqg"] Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.760148 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-llt2h"] Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.834547 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2ch26\" (UniqueName: \"kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.834695 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 
07:11:57.834757 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.834845 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.834886 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.834900 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0\") pod \"d3270013-a1f2-43bd-8f40-38b10b4253a1\" (UID: \"d3270013-a1f2-43bd-8f40-38b10b4253a1\") " Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.842344 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26" (OuterVolumeSpecName: "kube-api-access-2ch26") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "kube-api-access-2ch26". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.868201 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.872361 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.926854 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.937009 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2ch26\" (UniqueName: \"kubernetes.io/projected/d3270013-a1f2-43bd-8f40-38b10b4253a1-kube-api-access-2ch26\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.937043 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.937053 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.937060 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.961317 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:57 crc kubenswrapper[5136]: I0320 07:11:57.980553 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config" (OuterVolumeSpecName: "config") pod "d3270013-a1f2-43bd-8f40-38b10b4253a1" (UID: "d3270013-a1f2-43bd-8f40-38b10b4253a1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.007628 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.043751 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.043787 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d3270013-a1f2-43bd-8f40-38b10b4253a1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.089511 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kxk7p" event={"ID":"4f5241dc-9fdc-4e75-9924-fb00a2e6119d","Type":"ContainerStarted","Data":"22bd79d8d32272633a42a92ee1e9e96d3d3259073a33ca0ea587ca787429e836"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.089558 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kxk7p" event={"ID":"4f5241dc-9fdc-4e75-9924-fb00a2e6119d","Type":"ContainerStarted","Data":"9c68bfb878007ca6b62fba813ddcba80cdee73ef5b85374234305710364f4b28"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.106842 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fvfgp" event={"ID":"eb6007d0-4c13-43d0-b1b9-9e452fa9357f","Type":"ContainerStarted","Data":"624feab47793180e2f843804104146a4e3de4528636c1ebc9f47f7172993b072"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.106895 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fvfgp" event={"ID":"eb6007d0-4c13-43d0-b1b9-9e452fa9357f","Type":"ContainerStarted","Data":"61d7579d5cf960da949624b7fe0fdc10cd43078fd38353bfd5da73acb2c3a781"} Mar 20 07:11:58 crc 
kubenswrapper[5136]: I0320 07:11:58.120643 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n6cqg" event={"ID":"61300b5b-7c36-4857-a0bf-631bf3cbb001","Type":"ContainerStarted","Data":"b5e8ad0f3ddd8fc1ff41409f21a655282a69b3d531e79a60604f206a303c07a7"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.122362 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerStarted","Data":"b70abbe701b5afa37deb9280d8bab4f32e4ab209764879ee00b0808064143809"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.123785 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-llt2h" event={"ID":"2fc03366-82a1-4e30-a7e8-a06e16a8a14f","Type":"ContainerStarted","Data":"5200ac7ba7db438f0d107516ee128664a59b5fde08b29a5ce98e40e534824c47"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.125871 5136 generic.go:334] "Generic (PLEG): container finished" podID="ceda50a9-97c2-4310-b7ab-444024c33a87" containerID="e4b8e8eedb7b9d25ee4c3ed5740071573702f9421fc7bc697e1b313ba902496c" exitCode=0 Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.125960 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" event={"ID":"ceda50a9-97c2-4310-b7ab-444024c33a87","Type":"ContainerDied","Data":"e4b8e8eedb7b9d25ee4c3ed5740071573702f9421fc7bc697e1b313ba902496c"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.125992 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" event={"ID":"ceda50a9-97c2-4310-b7ab-444024c33a87","Type":"ContainerStarted","Data":"46bdbf8830a8054f8b2b9f249b53f0ef2256c1a4057d143ebe31897d3c9eecc6"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.134785 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-kxk7p" podStartSLOduration=2.134763168 
podStartE2EDuration="2.134763168s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:11:58.113184253 +0000 UTC m=+1350.372495404" watchObservedRunningTime="2026-03-20 07:11:58.134763168 +0000 UTC m=+1350.394074319" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.141462 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fvfgp" podStartSLOduration=2.14144124 podStartE2EDuration="2.14144124s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:11:58.128830558 +0000 UTC m=+1350.388141699" watchObservedRunningTime="2026-03-20 07:11:58.14144124 +0000 UTC m=+1350.400752381" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.146741 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" event={"ID":"d3270013-a1f2-43bd-8f40-38b10b4253a1","Type":"ContainerDied","Data":"446c6c3ae62c18c1fedd61d3a75b7402468673bf89bbfafebbd571740ddaa669"} Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.146795 5136 scope.go:117] "RemoveContainer" containerID="bf2834c66b5c522715063b4ba5f30618173a273c2bbf5480ab6f45f292898e0c" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.146987 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb65b4b5-wv44d" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.196739 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jv7f9"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.247878 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.294224 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cb65b4b5-wv44d"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.344242 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.466274 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3270013-a1f2-43bd-8f40-38b10b4253a1" path="/var/lib/kubelet/pods/d3270013-a1f2-43bd-8f40-38b10b4253a1/volumes" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.655073 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.661158 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.763599 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.763660 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.763742 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.764936 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.765009 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc 
kubenswrapper[5136]: I0320 07:11:58.765057 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6k46\" (UniqueName: \"kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46\") pod \"ceda50a9-97c2-4310-b7ab-444024c33a87\" (UID: \"ceda50a9-97c2-4310-b7ab-444024c33a87\") " Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.772914 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46" (OuterVolumeSpecName: "kube-api-access-z6k46") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "kube-api-access-z6k46". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.797547 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config" (OuterVolumeSpecName: "config") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.799509 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.799646 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.800596 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.809770 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ceda50a9-97c2-4310-b7ab-444024c33a87" (UID: "ceda50a9-97c2-4310-b7ab-444024c33a87"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867193 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867226 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867236 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6k46\" (UniqueName: \"kubernetes.io/projected/ceda50a9-97c2-4310-b7ab-444024c33a87-kube-api-access-z6k46\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867246 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867255 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:58 crc kubenswrapper[5136]: I0320 07:11:58.867264 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceda50a9-97c2-4310-b7ab-444024c33a87-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.158034 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jv7f9" event={"ID":"16f28a76-f7a5-4980-a693-7bd078f3c128","Type":"ContainerStarted","Data":"a6c0c7cbe316e747e497558676af55967b6ed940767c0667d59d4da80f64920a"} Mar 20 07:11:59 crc 
kubenswrapper[5136]: I0320 07:11:59.161266 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.161264 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ff8446d97-wzbhw" event={"ID":"ceda50a9-97c2-4310-b7ab-444024c33a87","Type":"ContainerDied","Data":"46bdbf8830a8054f8b2b9f249b53f0ef2256c1a4057d143ebe31897d3c9eecc6"} Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.161308 5136 scope.go:117] "RemoveContainer" containerID="e4b8e8eedb7b9d25ee4c3ed5740071573702f9421fc7bc697e1b313ba902496c" Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.168544 5136 generic.go:334] "Generic (PLEG): container finished" podID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerID="d619f45ef012eb3e909a4c91d853e16f0aac41aa3f7c34e99d85f79a8050ee1a" exitCode=0 Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.168589 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" event={"ID":"98a77e70-cc82-4a51-8475-d003a0ccf43e","Type":"ContainerDied","Data":"d619f45ef012eb3e909a4c91d853e16f0aac41aa3f7c34e99d85f79a8050ee1a"} Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.168632 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" event={"ID":"98a77e70-cc82-4a51-8475-d003a0ccf43e","Type":"ContainerStarted","Data":"b87a47428636e1da6de88425ef519d1313ec7d6e857d9dadf3b1f683fef7c84b"} Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.252576 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:11:59 crc kubenswrapper[5136]: I0320 07:11:59.265229 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ff8446d97-wzbhw"] Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.143726 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29566512-lrvjf"] Mar 20 07:12:00 crc kubenswrapper[5136]: E0320 07:12:00.144390 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3270013-a1f2-43bd-8f40-38b10b4253a1" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.144407 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3270013-a1f2-43bd-8f40-38b10b4253a1" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: E0320 07:12:00.144418 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceda50a9-97c2-4310-b7ab-444024c33a87" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.144425 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceda50a9-97c2-4310-b7ab-444024c33a87" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.144602 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceda50a9-97c2-4310-b7ab-444024c33a87" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.144624 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3270013-a1f2-43bd-8f40-38b10b4253a1" containerName="init" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.145162 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.158617 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.158691 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.158762 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.198320 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566512-lrvjf"] Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.201306 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" event={"ID":"98a77e70-cc82-4a51-8475-d003a0ccf43e","Type":"ContainerStarted","Data":"60532e8d3b2de1260b02b04d62ab8b4d0eed7842744135d5425179cf256cd7d4"} Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.202883 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.227919 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" podStartSLOduration=4.22788727 podStartE2EDuration="4.22788727s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:00.223052133 +0000 UTC m=+1352.482363304" watchObservedRunningTime="2026-03-20 07:12:00.22788727 +0000 UTC m=+1352.487198421" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.298205 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-xg925\" (UniqueName: \"kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925\") pod \"auto-csr-approver-29566512-lrvjf\" (UID: \"4ace6934-986e-463e-8e10-ea2d38d8657b\") " pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.400046 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg925\" (UniqueName: \"kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925\") pod \"auto-csr-approver-29566512-lrvjf\" (UID: \"4ace6934-986e-463e-8e10-ea2d38d8657b\") " pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.430135 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg925\" (UniqueName: \"kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925\") pod \"auto-csr-approver-29566512-lrvjf\" (UID: \"4ace6934-986e-463e-8e10-ea2d38d8657b\") " pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.440424 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceda50a9-97c2-4310-b7ab-444024c33a87" path="/var/lib/kubelet/pods/ceda50a9-97c2-4310-b7ab-444024c33a87/volumes" Mar 20 07:12:00 crc kubenswrapper[5136]: I0320 07:12:00.469927 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:02 crc kubenswrapper[5136]: I0320 07:12:02.225061 5136 generic.go:334] "Generic (PLEG): container finished" podID="eb6007d0-4c13-43d0-b1b9-9e452fa9357f" containerID="624feab47793180e2f843804104146a4e3de4528636c1ebc9f47f7172993b072" exitCode=0 Mar 20 07:12:02 crc kubenswrapper[5136]: I0320 07:12:02.225342 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fvfgp" event={"ID":"eb6007d0-4c13-43d0-b1b9-9e452fa9357f","Type":"ContainerDied","Data":"624feab47793180e2f843804104146a4e3de4528636c1ebc9f47f7172993b072"} Mar 20 07:12:03 crc kubenswrapper[5136]: I0320 07:12:03.108282 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566512-lrvjf"] Mar 20 07:12:07 crc kubenswrapper[5136]: I0320 07:12:07.156981 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:12:07 crc kubenswrapper[5136]: I0320 07:12:07.215722 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"] Mar 20 07:12:07 crc kubenswrapper[5136]: I0320 07:12:07.216142 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" containerID="cri-o://604b652a792660f1238e2607b4242155d6fa3281d34ce55590b668cd26222f1b" gracePeriod=10 Mar 20 07:12:07 crc kubenswrapper[5136]: W0320 07:12:07.811517 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4ace6934_986e_463e_8e10_ea2d38d8657b.slice/crio-568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f WatchSource:0}: Error finding container 568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f: Status 404 returned error can't find the container with id 
568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f Mar 20 07:12:07 crc kubenswrapper[5136]: I0320 07:12:07.928107 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044102 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044329 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jgz5\" (UniqueName: \"kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044359 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044380 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: 
\"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.044417 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts\") pod \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\" (UID: \"eb6007d0-4c13-43d0-b1b9-9e452fa9357f\") " Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.051090 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5" (OuterVolumeSpecName: "kube-api-access-2jgz5") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "kube-api-access-2jgz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.064990 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.065037 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts" (OuterVolumeSpecName: "scripts") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.066280 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.075773 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.090959 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data" (OuterVolumeSpecName: "config-data") pod "eb6007d0-4c13-43d0-b1b9-9e452fa9357f" (UID: "eb6007d0-4c13-43d0-b1b9-9e452fa9357f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146190 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146226 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146238 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146247 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jgz5\" (UniqueName: \"kubernetes.io/projected/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-kube-api-access-2jgz5\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146255 5136 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.146263 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/eb6007d0-4c13-43d0-b1b9-9e452fa9357f-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.289957 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fvfgp" event={"ID":"eb6007d0-4c13-43d0-b1b9-9e452fa9357f","Type":"ContainerDied","Data":"61d7579d5cf960da949624b7fe0fdc10cd43078fd38353bfd5da73acb2c3a781"} Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 
07:12:08.289999 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61d7579d5cf960da949624b7fe0fdc10cd43078fd38353bfd5da73acb2c3a781" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.290061 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fvfgp" Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.293766 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" event={"ID":"4ace6934-986e-463e-8e10-ea2d38d8657b","Type":"ContainerStarted","Data":"568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f"} Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.296012 5136 generic.go:334] "Generic (PLEG): container finished" podID="d103abed-83b7-44e9-bc7f-786434426647" containerID="604b652a792660f1238e2607b4242155d6fa3281d34ce55590b668cd26222f1b" exitCode=0 Mar 20 07:12:08 crc kubenswrapper[5136]: I0320 07:12:08.296041 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" event={"ID":"d103abed-83b7-44e9-bc7f-786434426647","Type":"ContainerDied","Data":"604b652a792660f1238e2607b4242155d6fa3281d34ce55590b668cd26222f1b"} Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.015963 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fvfgp"] Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.023281 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fvfgp"] Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.116184 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-xztql"] Mar 20 07:12:09 crc kubenswrapper[5136]: E0320 07:12:09.116558 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6007d0-4c13-43d0-b1b9-9e452fa9357f" containerName="keystone-bootstrap" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.116578 
5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6007d0-4c13-43d0-b1b9-9e452fa9357f" containerName="keystone-bootstrap" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.116746 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6007d0-4c13-43d0-b1b9-9e452fa9357f" containerName="keystone-bootstrap" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.117286 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.118880 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.120126 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.121102 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6hlpf" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.121189 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.121286 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.128115 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xztql"] Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161222 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161302 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161343 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161404 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161458 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.161480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdg86\" (UniqueName: \"kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262221 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262287 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262334 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262372 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262424 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.262447 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wdg86\" (UniqueName: 
\"kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.267856 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.268079 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.268603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.269550 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.270173 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data\") pod \"keystone-bootstrap-xztql\" (UID: 
\"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.286577 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdg86\" (UniqueName: \"kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86\") pod \"keystone-bootstrap-xztql\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:09 crc kubenswrapper[5136]: I0320 07:12:09.469084 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:10 crc kubenswrapper[5136]: I0320 07:12:10.430467 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6007d0-4c13-43d0-b1b9-9e452fa9357f" path="/var/lib/kubelet/pods/eb6007d0-4c13-43d0-b1b9-9e452fa9357f/volumes" Mar 20 07:12:11 crc kubenswrapper[5136]: E0320 07:12:11.853997 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:1240a45aec9c3e1599be762c5565556560849b49fd39c7283b8e5519dcaa501a" Mar 20 07:12:11 crc kubenswrapper[5136]: E0320 07:12:11.854187 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:1240a45aec9c3e1599be762c5565556560849b49fd39c7283b8e5519dcaa501a,Command:[/bin/bash],Args:[-c barbican-manage db 
upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6s9rd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-jv7f9_openstack(16f28a76-f7a5-4980-a693-7bd078f3c128): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:12:11 crc kubenswrapper[5136]: E0320 07:12:11.855407 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-jv7f9" 
podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" Mar 20 07:12:12 crc kubenswrapper[5136]: E0320 07:12:12.327645 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api@sha256:1240a45aec9c3e1599be762c5565556560849b49fd39c7283b8e5519dcaa501a\\\"\"" pod="openstack/barbican-db-sync-jv7f9" podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" Mar 20 07:12:15 crc kubenswrapper[5136]: E0320 07:12:15.621279 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8c6efdb_3b8c_4123_bfb6_a67cd416fb18.slice/crio-dd50ea3e8d708d6e3b7b256a2ea07c9211cc4921a494661346061b12daf9a3f3.scope\": RecentStats: unable to find data in memory cache]" Mar 20 07:12:15 crc kubenswrapper[5136]: I0320 07:12:15.821772 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:12:15 crc kubenswrapper[5136]: I0320 07:12:15.821851 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:12:15 crc kubenswrapper[5136]: I0320 07:12:15.821898 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:12:15 crc kubenswrapper[5136]: I0320 07:12:15.822519 5136 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:12:15 crc kubenswrapper[5136]: I0320 07:12:15.822572 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f" gracePeriod=600 Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.097491 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: i/o timeout" Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.363082 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f" exitCode=0 Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.363159 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f"} Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.363468 5136 scope.go:117] "RemoveContainer" containerID="f8e515aa640e8c2897bc9d76b24ec080a3948c8f2224026c8645b6359dd2670f" Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.365001 5136 generic.go:334] "Generic (PLEG): container finished" podID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" 
containerID="dd50ea3e8d708d6e3b7b256a2ea07c9211cc4921a494661346061b12daf9a3f3" exitCode=0 Mar 20 07:12:16 crc kubenswrapper[5136]: I0320 07:12:16.365023 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ldzkm" event={"ID":"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18","Type":"ContainerDied","Data":"dd50ea3e8d708d6e3b7b256a2ea07c9211cc4921a494661346061b12daf9a3f3"} Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.653417 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ldzkm" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.659962 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.764843 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wch7g\" (UniqueName: \"kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g\") pod \"d103abed-83b7-44e9-bc7f-786434426647\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.764910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle\") pod \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.764937 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config\") pod \"d103abed-83b7-44e9-bc7f-786434426647\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.764957 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data\") pod \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.764974 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc\") pod \"d103abed-83b7-44e9-bc7f-786434426647\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.765002 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4cjp\" (UniqueName: \"kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp\") pod \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.765117 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb\") pod \"d103abed-83b7-44e9-bc7f-786434426647\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.765174 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data\") pod \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\" (UID: \"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18\") " Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.765192 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb\") pod \"d103abed-83b7-44e9-bc7f-786434426647\" (UID: \"d103abed-83b7-44e9-bc7f-786434426647\") " Mar 20 07:12:20 crc 
kubenswrapper[5136]: I0320 07:12:20.769005 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" (UID: "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.769102 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g" (OuterVolumeSpecName: "kube-api-access-wch7g") pod "d103abed-83b7-44e9-bc7f-786434426647" (UID: "d103abed-83b7-44e9-bc7f-786434426647"). InnerVolumeSpecName "kube-api-access-wch7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.786134 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp" (OuterVolumeSpecName: "kube-api-access-n4cjp") pod "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" (UID: "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18"). InnerVolumeSpecName "kube-api-access-n4cjp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.808565 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d103abed-83b7-44e9-bc7f-786434426647" (UID: "d103abed-83b7-44e9-bc7f-786434426647"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.816577 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config" (OuterVolumeSpecName: "config") pod "d103abed-83b7-44e9-bc7f-786434426647" (UID: "d103abed-83b7-44e9-bc7f-786434426647"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.821174 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d103abed-83b7-44e9-bc7f-786434426647" (UID: "d103abed-83b7-44e9-bc7f-786434426647"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.821259 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" (UID: "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.826845 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d103abed-83b7-44e9-bc7f-786434426647" (UID: "d103abed-83b7-44e9-bc7f-786434426647"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.836082 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data" (OuterVolumeSpecName: "config-data") pod "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" (UID: "a8c6efdb-3b8c-4123-bfb6-a67cd416fb18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866870 5136 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866900 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866911 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wch7g\" (UniqueName: \"kubernetes.io/projected/d103abed-83b7-44e9-bc7f-786434426647-kube-api-access-wch7g\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866921 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866931 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866940 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866949 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866957 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4cjp\" (UniqueName: \"kubernetes.io/projected/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18-kube-api-access-n4cjp\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:20 crc kubenswrapper[5136]: I0320 07:12:20.866964 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d103abed-83b7-44e9-bc7f-786434426647-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.098116 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.121:5353: i/o timeout" Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.423699 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" event={"ID":"d103abed-83b7-44e9-bc7f-786434426647","Type":"ContainerDied","Data":"5e9c9cbaa5f048deb6cd641b68b15e9585b27cf0df0654e28450ce6a42c152c0"} Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.423723 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b4ddd5fb7-cfjcg" Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.425374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ldzkm" event={"ID":"a8c6efdb-3b8c-4123-bfb6-a67cd416fb18","Type":"ContainerDied","Data":"aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf"} Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.425416 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aed70dbef306356adc19fcd32835aa191f676d0a2ff614fd22073d9b45e66eaf" Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.425412 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ldzkm" Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.471324 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"] Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.477848 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b4ddd5fb7-cfjcg"] Mar 20 07:12:21 crc kubenswrapper[5136]: I0320 07:12:21.936646 5136 scope.go:117] "RemoveContainer" containerID="604b652a792660f1238e2607b4242155d6fa3281d34ce55590b668cd26222f1b" Mar 20 07:12:21 crc kubenswrapper[5136]: E0320 07:12:21.962065 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b" Mar 20 07:12:21 crc kubenswrapper[5136]: E0320 07:12:21.962276 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4skhx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-llt2h_openstack(2fc03366-82a1-4e30-a7e8-a06e16a8a14f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 20 07:12:21 crc kubenswrapper[5136]: E0320 07:12:21.963561 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-llt2h" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.089629 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"] Mar 20 07:12:22 crc kubenswrapper[5136]: E0320 07:12:22.089967 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" containerName="glance-db-sync" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.089981 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" containerName="glance-db-sync" Mar 20 07:12:22 crc kubenswrapper[5136]: E0320 07:12:22.089993 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.090000 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" Mar 20 07:12:22 crc kubenswrapper[5136]: E0320 07:12:22.090009 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="init" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.090015 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="init" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.102684 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d103abed-83b7-44e9-bc7f-786434426647" containerName="dnsmasq-dns" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.102724 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" containerName="glance-db-sync" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.115627 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.094332 5136 scope.go:117] "RemoveContainer" containerID="533f92371dee2235f15d0d84ab9f13da275c7e919c6618b46cdf3ab8345571a9" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.155075 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"] Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.293882 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8c4\" (UniqueName: \"kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.294266 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.294306 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.294364 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.294391 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.294416 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397008 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397077 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397098 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397115 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397180 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd8c4\" (UniqueName: \"kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.397217 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.398090 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.398573 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.399087 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.399556 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.400241 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config\") pod \"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.423339 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd8c4\" (UniqueName: \"kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4\") pod 
\"dnsmasq-dns-5d7fb48775-2qcl2\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.424317 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d103abed-83b7-44e9-bc7f-786434426647" path="/var/lib/kubelet/pods/d103abed-83b7-44e9-bc7f-786434426647/volumes" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.437187 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n6cqg" event={"ID":"61300b5b-7c36-4857-a0bf-631bf3cbb001","Type":"ContainerStarted","Data":"031d15e3c6d48fb60bf7992b603ae52f0ad57d2692789532d8ff3e43150b8a62"} Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.449760 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerStarted","Data":"7336863799fdb9fab29de80cd8bd1d394cbcbcf4fd209c912a6795c7665f330a"} Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.466200 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-n6cqg" podStartSLOduration=3.659201902 podStartE2EDuration="26.466148146s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="2026-03-20 07:11:57.7406974 +0000 UTC m=+1350.000008551" lastFinishedPulling="2026-03-20 07:12:20.547643644 +0000 UTC m=+1372.806954795" observedRunningTime="2026-03-20 07:12:22.460483351 +0000 UTC m=+1374.719794492" watchObservedRunningTime="2026-03-20 07:12:22.466148146 +0000 UTC m=+1374.725459297" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.473275 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6"} Mar 20 07:12:22 crc kubenswrapper[5136]: E0320 
07:12:22.476761 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api@sha256:574a17f0877c175128a764f2b37fc02456649c8514689125718ce6ca974bfb6b\\\"\"" pod="openstack/cinder-db-sync-llt2h" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.494255 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.564665 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-xztql"] Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.993872 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.995864 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:12:22 crc kubenswrapper[5136]: I0320 07:12:22.999071 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-4q9lc" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.002061 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.002584 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.007167 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"] Mar 20 07:12:23 crc kubenswrapper[5136]: W0320 07:12:23.016539 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podec92c94f_350b_410f_af36_f232e43c51bc.slice/crio-23b1640d39e4ae1160a3dad399449ea185d7930c6aa60e3ac354c8577e44362b WatchSource:0}: Error finding container 23b1640d39e4ae1160a3dad399449ea185d7930c6aa60e3ac354c8577e44362b: Status 404 returned error can't find the container with id 23b1640d39e4ae1160a3dad399449ea185d7930c6aa60e3ac354c8577e44362b Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.039628 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.108028 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.108277 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.108886 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.108936 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.109004 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg6g6\" (UniqueName: \"kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.109202 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.109246 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219705 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219756 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219798 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219824 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219845 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg6g6\" (UniqueName: 
\"kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219882 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.219981 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.221365 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.225146 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs\") 
pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.228108 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.231839 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.232704 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.240305 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg6g6\" (UniqueName: \"kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.254134 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") " 
pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.274347 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.275562 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.284194 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.287499 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423340 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423399 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423566 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl96d\" (UniqueName: \"kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 
07:12:23.423605 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423716 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423780 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.423848 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.485545 5136 generic.go:334] "Generic (PLEG): container finished" podID="4ace6934-986e-463e-8e10-ea2d38d8657b" containerID="be9df9297d087d9b583ba3c8a236fca6fd4fd729e25496c50522e980d7021c09" exitCode=0 Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.485621 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" 
event={"ID":"4ace6934-986e-463e-8e10-ea2d38d8657b","Type":"ContainerDied","Data":"be9df9297d087d9b583ba3c8a236fca6fd4fd729e25496c50522e980d7021c09"} Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.486528 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.496700 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xztql" event={"ID":"72e22e43-fccc-4ee4-a170-8ff8b9959c1d","Type":"ContainerStarted","Data":"517443469d4fcf677c53f3f830f5a94c22bc034822199c2c81fb70956a791274"} Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.496745 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xztql" event={"ID":"72e22e43-fccc-4ee4-a170-8ff8b9959c1d","Type":"ContainerStarted","Data":"a9fa96c65fa536fe3aa83743698eb9e3fddbb0c4cc2f21d54eee2d77fd4acf9d"} Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.500884 5136 generic.go:334] "Generic (PLEG): container finished" podID="ec92c94f-350b-410f-af36-f232e43c51bc" containerID="2ae57c8c056dcb5d29e8273f89d2edf49540ec43cbd77b2ca4a8816f0f65f160" exitCode=0 Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.501330 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" event={"ID":"ec92c94f-350b-410f-af36-f232e43c51bc","Type":"ContainerDied","Data":"2ae57c8c056dcb5d29e8273f89d2edf49540ec43cbd77b2ca4a8816f0f65f160"} Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.501355 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" event={"ID":"ec92c94f-350b-410f-af36-f232e43c51bc","Type":"ContainerStarted","Data":"23b1640d39e4ae1160a3dad399449ea185d7930c6aa60e3ac354c8577e44362b"} Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.524031 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-bootstrap-xztql" podStartSLOduration=14.524008697 podStartE2EDuration="14.524008697s" podCreationTimestamp="2026-03-20 07:12:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:23.513499039 +0000 UTC m=+1375.772810180" watchObservedRunningTime="2026-03-20 07:12:23.524008697 +0000 UTC m=+1375.783319848" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525629 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525680 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525772 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sl96d\" (UniqueName: \"kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525829 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 
07:12:23.525881 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525909 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.525973 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.526850 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.526999 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.527212 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.529713 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.534643 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.549541 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sl96d\" (UniqueName: \"kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.550206 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.563800 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:23 crc kubenswrapper[5136]: I0320 07:12:23.623435 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.164130 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.285700 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.510920 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" event={"ID":"ec92c94f-350b-410f-af36-f232e43c51bc","Type":"ContainerStarted","Data":"5b9f552fe91aa28f997b092d45d1079ab658cc3b43ebd7ab371c11c9bbbdd7ef"} Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.536200 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" podStartSLOduration=2.536181293 podStartE2EDuration="2.536181293s" podCreationTimestamp="2026-03-20 07:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:24.532487337 +0000 UTC m=+1376.791798498" watchObservedRunningTime="2026-03-20 07:12:24.536181293 +0000 UTC m=+1376.795492444" Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.921616 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.981724 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg925\" (UniqueName: \"kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925\") pod \"4ace6934-986e-463e-8e10-ea2d38d8657b\" (UID: \"4ace6934-986e-463e-8e10-ea2d38d8657b\") " Mar 20 07:12:24 crc kubenswrapper[5136]: I0320 07:12:24.986831 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925" (OuterVolumeSpecName: "kube-api-access-xg925") pod "4ace6934-986e-463e-8e10-ea2d38d8657b" (UID: "4ace6934-986e-463e-8e10-ea2d38d8657b"). InnerVolumeSpecName "kube-api-access-xg925". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.083861 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg925\" (UniqueName: \"kubernetes.io/projected/4ace6934-986e-463e-8e10-ea2d38d8657b-kube-api-access-xg925\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.476208 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.537321 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerStarted","Data":"c4d6db431c35fa9e07903364d5a4d9c0b236e4c6c9d596e00a75c84d0a2d318d"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.537363 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerStarted","Data":"5e2b1f562a9d2caa514e6b0a9c5b2c39146dfbd7683f095229f8f6fd7616c3a3"} Mar 20 
07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.541774 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerStarted","Data":"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.541828 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerStarted","Data":"35333a6bcde4737efed42da72fdec9f87c5d7d96bc41af031566e8d824ff32aa"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.553041 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.556389 5136 generic.go:334] "Generic (PLEG): container finished" podID="61300b5b-7c36-4857-a0bf-631bf3cbb001" containerID="031d15e3c6d48fb60bf7992b603ae52f0ad57d2692789532d8ff3e43150b8a62" exitCode=0 Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.556457 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n6cqg" event={"ID":"61300b5b-7c36-4857-a0bf-631bf3cbb001","Type":"ContainerDied","Data":"031d15e3c6d48fb60bf7992b603ae52f0ad57d2692789532d8ff3e43150b8a62"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.558490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerStarted","Data":"cc654c94c6c668e03f2204d6c1f1eaaff73ba2c53ec4645dd69d881bd133a570"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.560621 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.560895 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566512-lrvjf" event={"ID":"4ace6934-986e-463e-8e10-ea2d38d8657b","Type":"ContainerDied","Data":"568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f"} Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.560949 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="568d79cc1dd46d8965f516658166d9e10f03484afb5a53024438fcc72337da1f" Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.560975 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.976561 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566506-bbg6r"] Mar 20 07:12:25 crc kubenswrapper[5136]: I0320 07:12:25.997396 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566506-bbg6r"] Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.413084 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca3533ad-761e-45d8-8a1a-0e679b602e08" path="/var/lib/kubelet/pods/ca3533ad-761e-45d8-8a1a-0e679b602e08/volumes" Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.569833 5136 generic.go:334] "Generic (PLEG): container finished" podID="72e22e43-fccc-4ee4-a170-8ff8b9959c1d" containerID="517443469d4fcf677c53f3f830f5a94c22bc034822199c2c81fb70956a791274" exitCode=0 Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.569916 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xztql" event={"ID":"72e22e43-fccc-4ee4-a170-8ff8b9959c1d","Type":"ContainerDied","Data":"517443469d4fcf677c53f3f830f5a94c22bc034822199c2c81fb70956a791274"} Mar 20 07:12:26 crc kubenswrapper[5136]: 
I0320 07:12:26.572374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerStarted","Data":"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"} Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.572438 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-log" containerID="cri-o://8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0" gracePeriod=30 Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.572475 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-httpd" containerID="cri-o://1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90" gracePeriod=30 Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.573901 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jv7f9" event={"ID":"16f28a76-f7a5-4980-a693-7bd078f3c128","Type":"ContainerStarted","Data":"8f39b26d5a3a98eb4a0bd3688d06d7a44225eee0ee50099fc60fd5816beb4256"} Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.582778 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerStarted","Data":"8bd6ba13b920fd9d372e08641d2f4dfd1411247734ac4e2a17f92e8c7045515a"} Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.582943 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-log" containerID="cri-o://c4d6db431c35fa9e07903364d5a4d9c0b236e4c6c9d596e00a75c84d0a2d318d" gracePeriod=30 Mar 20 07:12:26 crc 
kubenswrapper[5136]: I0320 07:12:26.582974 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-httpd" containerID="cri-o://8bd6ba13b920fd9d372e08641d2f4dfd1411247734ac4e2a17f92e8c7045515a" gracePeriod=30
Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.618206 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.618187331 podStartE2EDuration="5.618187331s" podCreationTimestamp="2026-03-20 07:12:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:26.611194274 +0000 UTC m=+1378.870505425" watchObservedRunningTime="2026-03-20 07:12:26.618187331 +0000 UTC m=+1378.877498482"
Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.641467 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jv7f9" podStartSLOduration=2.992705396 podStartE2EDuration="30.641446656s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="2026-03-20 07:11:58.220389446 +0000 UTC m=+1350.479700597" lastFinishedPulling="2026-03-20 07:12:25.869130706 +0000 UTC m=+1378.128441857" observedRunningTime="2026-03-20 07:12:26.63290717 +0000 UTC m=+1378.892218321" watchObservedRunningTime="2026-03-20 07:12:26.641446656 +0000 UTC m=+1378.900757807"
Mar 20 07:12:26 crc kubenswrapper[5136]: I0320 07:12:26.657460 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.657426324 podStartE2EDuration="4.657426324s" podCreationTimestamp="2026-03-20 07:12:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:26.654104741 +0000 UTC m=+1378.913415892" watchObservedRunningTime="2026-03-20 07:12:26.657426324 +0000 UTC m=+1378.916737475"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.052658 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n6cqg"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.119569 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts\") pod \"61300b5b-7c36-4857-a0bf-631bf3cbb001\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.119650 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data\") pod \"61300b5b-7c36-4857-a0bf-631bf3cbb001\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.119780 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs\") pod \"61300b5b-7c36-4857-a0bf-631bf3cbb001\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.119832 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kt5bh\" (UniqueName: \"kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh\") pod \"61300b5b-7c36-4857-a0bf-631bf3cbb001\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.119850 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle\") pod \"61300b5b-7c36-4857-a0bf-631bf3cbb001\" (UID: \"61300b5b-7c36-4857-a0bf-631bf3cbb001\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.120474 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs" (OuterVolumeSpecName: "logs") pod "61300b5b-7c36-4857-a0bf-631bf3cbb001" (UID: "61300b5b-7c36-4857-a0bf-631bf3cbb001"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.126060 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh" (OuterVolumeSpecName: "kube-api-access-kt5bh") pod "61300b5b-7c36-4857-a0bf-631bf3cbb001" (UID: "61300b5b-7c36-4857-a0bf-631bf3cbb001"). InnerVolumeSpecName "kube-api-access-kt5bh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.150261 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data" (OuterVolumeSpecName: "config-data") pod "61300b5b-7c36-4857-a0bf-631bf3cbb001" (UID: "61300b5b-7c36-4857-a0bf-631bf3cbb001"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.150636 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts" (OuterVolumeSpecName: "scripts") pod "61300b5b-7c36-4857-a0bf-631bf3cbb001" (UID: "61300b5b-7c36-4857-a0bf-631bf3cbb001"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.199691 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61300b5b-7c36-4857-a0bf-631bf3cbb001" (UID: "61300b5b-7c36-4857-a0bf-631bf3cbb001"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.222630 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61300b5b-7c36-4857-a0bf-631bf3cbb001-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.222740 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kt5bh\" (UniqueName: \"kubernetes.io/projected/61300b5b-7c36-4857-a0bf-631bf3cbb001-kube-api-access-kt5bh\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.222760 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.222793 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.222802 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61300b5b-7c36-4857-a0bf-631bf3cbb001-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.386223 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428439 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428505 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428556 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428652 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428723 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428871 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.428949 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl96d\" (UniqueName: \"kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d\") pod \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\" (UID: \"1769e60f-b60b-4a9d-aa9c-57773220f7c0\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.431885 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs" (OuterVolumeSpecName: "logs") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.433513 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.436558 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.439754 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts" (OuterVolumeSpecName: "scripts") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.461183 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d" (OuterVolumeSpecName: "kube-api-access-sl96d") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "kube-api-access-sl96d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.480651 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.502298 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data" (OuterVolumeSpecName: "config-data") pod "1769e60f-b60b-4a9d-aa9c-57773220f7c0" (UID: "1769e60f-b60b-4a9d-aa9c-57773220f7c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534726 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534761 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1769e60f-b60b-4a9d-aa9c-57773220f7c0-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534792 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534803 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sl96d\" (UniqueName: \"kubernetes.io/projected/1769e60f-b60b-4a9d-aa9c-57773220f7c0-kube-api-access-sl96d\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534829 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534839 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.534846 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1769e60f-b60b-4a9d-aa9c-57773220f7c0-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.557998 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.601338 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-n6cqg" event={"ID":"61300b5b-7c36-4857-a0bf-631bf3cbb001","Type":"ContainerDied","Data":"b5e8ad0f3ddd8fc1ff41409f21a655282a69b3d531e79a60604f206a303c07a7"}
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.601370 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5e8ad0f3ddd8fc1ff41409f21a655282a69b3d531e79a60604f206a303c07a7"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.601450 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-n6cqg"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.605724 5136 generic.go:334] "Generic (PLEG): container finished" podID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerID="8bd6ba13b920fd9d372e08641d2f4dfd1411247734ac4e2a17f92e8c7045515a" exitCode=0
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.605763 5136 generic.go:334] "Generic (PLEG): container finished" podID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerID="c4d6db431c35fa9e07903364d5a4d9c0b236e4c6c9d596e00a75c84d0a2d318d" exitCode=143
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.605828 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerDied","Data":"8bd6ba13b920fd9d372e08641d2f4dfd1411247734ac4e2a17f92e8c7045515a"}
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.605862 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerDied","Data":"c4d6db431c35fa9e07903364d5a4d9c0b236e4c6c9d596e00a75c84d0a2d318d"}
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.614961 5136 generic.go:334] "Generic (PLEG): container finished" podID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerID="1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90" exitCode=0
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615205 5136 generic.go:334] "Generic (PLEG): container finished" podID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerID="8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0" exitCode=143
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615269 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerDied","Data":"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"}
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615302 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerDied","Data":"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"}
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615315 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1769e60f-b60b-4a9d-aa9c-57773220f7c0","Type":"ContainerDied","Data":"35333a6bcde4737efed42da72fdec9f87c5d7d96bc41af031566e8d824ff32aa"}
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615334 5136 scope.go:117] "RemoveContainer" containerID="1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.615478 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.632429 5136 generic.go:334] "Generic (PLEG): container finished" podID="4f5241dc-9fdc-4e75-9924-fb00a2e6119d" containerID="22bd79d8d32272633a42a92ee1e9e96d3d3259073a33ca0ea587ca787429e836" exitCode=0
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.632618 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kxk7p" event={"ID":"4f5241dc-9fdc-4e75-9924-fb00a2e6119d","Type":"ContainerDied","Data":"22bd79d8d32272633a42a92ee1e9e96d3d3259073a33ca0ea587ca787429e836"}
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.636552 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.683765 5136 scope.go:117] "RemoveContainer" containerID="8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.699120 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.707925 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.722777 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.738563 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.738961 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-log"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.738978 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-log"
Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.738989 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-log"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.738997 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-log"
Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.739007 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61300b5b-7c36-4857-a0bf-631bf3cbb001" containerName="placement-db-sync"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739015 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="61300b5b-7c36-4857-a0bf-631bf3cbb001" containerName="placement-db-sync"
Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.739033 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-httpd"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739038 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-httpd"
Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.739049 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ace6934-986e-463e-8e10-ea2d38d8657b" containerName="oc"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739055 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ace6934-986e-463e-8e10-ea2d38d8657b" containerName="oc"
Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.739067 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-httpd"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739072 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-httpd"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739258 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-httpd"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739270 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ace6934-986e-463e-8e10-ea2d38d8657b" containerName="oc"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739279 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-log"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739289 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" containerName="glance-log"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739299 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" containerName="glance-httpd"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739308 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="61300b5b-7c36-4857-a0bf-631bf3cbb001" containerName="placement-db-sync"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739632 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lg6g6\" (UniqueName: \"kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739663 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739704 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739722 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739737 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739758 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.739862 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ac3187ac-eebe-4584-9624-e4127b6ee040\" (UID: \"ac3187ac-eebe-4584-9624-e4127b6ee040\") "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.740292 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs" (OuterVolumeSpecName: "logs") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.740397 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.741197 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.744796 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.745211 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6" (OuterVolumeSpecName: "kube-api-access-lg6g6") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "kube-api-access-lg6g6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.745769 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.748384 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.749964 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.767511 5136 scope.go:117] "RemoveContainer" containerID="1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"
Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.775564 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90\": container with ID starting with 1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90 not found: ID does not exist" containerID="1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.775702 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"} err="failed to get container status \"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90\": rpc error: code = NotFound desc = could not find container \"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90\": container with ID starting with 1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90 not found: ID does not exist"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.775730 5136 scope.go:117] "RemoveContainer" containerID="8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"
Mar 20 07:12:27 crc kubenswrapper[5136]: E0320 07:12:27.776692 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0\": container with ID starting with 8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0 not found: ID does not exist" containerID="8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.776751 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"} err="failed to get container status \"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0\": rpc error: code = NotFound desc = could not find container \"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0\": container with ID starting with 8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0 not found: ID does not exist"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.777920 5136 scope.go:117] "RemoveContainer" containerID="1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.778655 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90"} err="failed to get container status \"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90\": rpc error: code = NotFound desc = could not find container \"1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90\": container with ID starting with 1f3e3b706ddd8b31ec540347eea9a5befe79e6123cafcbe590bee8827b958c90 not found: ID does not exist"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.778697 5136 scope.go:117] "RemoveContainer" containerID="8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.780100 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0"} err="failed to get container status \"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0\": rpc error: code = NotFound desc = could not find container \"8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0\": container with ID starting with 8f922b181053d15720ca779901439eb15f443d7e6eb083c68d7478946400a5c0 not found: ID does not exist"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.801391 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts" (OuterVolumeSpecName: "scripts") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.814286 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848234 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848365 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848413 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848434 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848580 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848600 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx28m\" (UniqueName: \"kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848646 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.848959 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849062 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" "
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849076 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lg6g6\" (UniqueName: \"kubernetes.io/projected/ac3187ac-eebe-4584-9624-e4127b6ee040-kube-api-access-lg6g6\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849087 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849096 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849105 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.849113 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac3187ac-eebe-4584-9624-e4127b6ee040-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.881801 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-f464f8686-f4nfl"]
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.883463 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f464f8686-f4nfl"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.885710 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.886000 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-4t29g"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.886136 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.886329 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.887252 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.892151 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f464f8686-f4nfl"]
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.901843 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc"
Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.935345 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data" (OuterVolumeSpecName: "config-data") pod "ac3187ac-eebe-4584-9624-e4127b6ee040" (UID: "ac3187ac-eebe-4584-9624-e4127b6ee040"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950503 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950545 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950580 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950761 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950848 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " 
pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950876 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv6wk\" (UniqueName: \"kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950928 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950967 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.950993 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951032 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " 
pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951055 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951141 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951164 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx28m\" (UniqueName: \"kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951211 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951295 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:27 crc 
kubenswrapper[5136]: I0320 07:12:27.951377 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.951398 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac3187ac-eebe-4584-9624-e4127b6ee040-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.952450 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.952670 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.962646 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.966950 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " 
pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.969462 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.973572 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:27 crc kubenswrapper[5136]: I0320 07:12:27.990324 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.003884 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.008939 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx28m\" (UniqueName: \"kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m\") pod \"glance-default-internal-api-0\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053248 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053324 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053396 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mv6wk\" (UniqueName: \"kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053425 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.053493 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.054049 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.057976 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.058422 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.059176 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data\") pod \"placement-f464f8686-f4nfl\" (UID: 
\"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.061703 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.062557 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.069990 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.071038 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mv6wk\" (UniqueName: \"kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk\") pod \"placement-f464f8686-f4nfl\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") " pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.196576 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.219197 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255429 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255503 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255562 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255608 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdg86\" (UniqueName: \"kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.255676 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle\") pod \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\" (UID: \"72e22e43-fccc-4ee4-a170-8ff8b9959c1d\") " Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.263682 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.267341 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts" (OuterVolumeSpecName: "scripts") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.267489 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86" (OuterVolumeSpecName: "kube-api-access-wdg86") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "kube-api-access-wdg86". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.267556 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.305547 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data" (OuterVolumeSpecName: "config-data") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.307039 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72e22e43-fccc-4ee4-a170-8ff8b9959c1d" (UID: "72e22e43-fccc-4ee4-a170-8ff8b9959c1d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357872 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357901 5136 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357911 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357921 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wdg86\" (UniqueName: \"kubernetes.io/projected/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-kube-api-access-wdg86\") on node \"crc\" DevicePath \"\"" Mar 20 
07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357930 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.357963 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/72e22e43-fccc-4ee4-a170-8ff8b9959c1d-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.413787 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1769e60f-b60b-4a9d-aa9c-57773220f7c0" path="/var/lib/kubelet/pods/1769e60f-b60b-4a9d-aa9c-57773220f7c0/volumes" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.642652 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"ac3187ac-eebe-4584-9624-e4127b6ee040","Type":"ContainerDied","Data":"5e2b1f562a9d2caa514e6b0a9c5b2c39146dfbd7683f095229f8f6fd7616c3a3"} Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.642681 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.642711 5136 scope.go:117] "RemoveContainer" containerID="8bd6ba13b920fd9d372e08641d2f4dfd1411247734ac4e2a17f92e8c7045515a" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.645546 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-xztql" event={"ID":"72e22e43-fccc-4ee4-a170-8ff8b9959c1d","Type":"ContainerDied","Data":"a9fa96c65fa536fe3aa83743698eb9e3fddbb0c4cc2f21d54eee2d77fd4acf9d"} Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.645579 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9fa96c65fa536fe3aa83743698eb9e3fddbb0c4cc2f21d54eee2d77fd4acf9d" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.645619 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-xztql" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.693310 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.705393 5136 scope.go:117] "RemoveContainer" containerID="c4d6db431c35fa9e07903364d5a4d9c0b236e4c6c9d596e00a75c84d0a2d318d" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.705999 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.717888 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.733264 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:28 crc kubenswrapper[5136]: E0320 07:12:28.733710 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72e22e43-fccc-4ee4-a170-8ff8b9959c1d" 
containerName="keystone-bootstrap" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.733725 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="72e22e43-fccc-4ee4-a170-8ff8b9959c1d" containerName="keystone-bootstrap" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.733934 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="72e22e43-fccc-4ee4-a170-8ff8b9959c1d" containerName="keystone-bootstrap" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.734974 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.741171 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.741295 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 20 07:12:28 crc kubenswrapper[5136]: W0320 07:12:28.746201 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5249fb5b_8908_4b21_9ea3_28508854ce4a.slice/crio-c7adb5a2a9c06da1244ea87915af7e6cfa3d0ce95887a1e45fa3f12d10155ea2 WatchSource:0}: Error finding container c7adb5a2a9c06da1244ea87915af7e6cfa3d0ce95887a1e45fa3f12d10155ea2: Status 404 returned error can't find the container with id c7adb5a2a9c06da1244ea87915af7e6cfa3d0ce95887a1e45fa3f12d10155ea2 Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.763470 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:12:28 crc kubenswrapper[5136]: W0320 07:12:28.766694 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98f17780_5e89_47b5_a280_ff05d993aec1.slice/crio-5651bf9b6915e66b83ba1121283006e83d01461bec1d3f8053fa31cecb4a7017 
WatchSource:0}: Error finding container 5651bf9b6915e66b83ba1121283006e83d01461bec1d3f8053fa31cecb4a7017: Status 404 returned error can't find the container with id 5651bf9b6915e66b83ba1121283006e83d01461bec1d3f8053fa31cecb4a7017
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.785374 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-f464f8686-f4nfl"]
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.795177 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-766d94c967-pb9qd"]
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.796768 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.803250 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-6hlpf"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.803826 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.804082 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.804293 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.804542 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.806876 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.818510 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-766d94c967-pb9qd"]
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.870596 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrqvz\" (UniqueName: \"kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.870653 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.870922 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.870990 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.871067 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.871157 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.871219 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.871369 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972763 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972843 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972885 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972907 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972942 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.972959 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973000 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973019 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrqvz\" (UniqueName: \"kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973035 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973063 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-698x2\" (UniqueName: \"kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973086 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973105 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973150 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973166 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.973191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.974067 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.974332 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.974730 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.981487 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.981532 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:28 crc kubenswrapper[5136]: I0320 07:12:28.991904 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.002379 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrqvz\" (UniqueName: \"kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.010405 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.027026 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " pod="openstack/glance-default-external-api-0"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.064146 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074352 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074454 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-698x2\" (UniqueName: \"kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074482 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074501 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074517 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074546 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074586 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.074634 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.081532 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.082607 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.082789 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.083236 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.087127 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.091790 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.097227 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.105452 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-698x2\" (UniqueName: \"kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2\") pod \"keystone-766d94c967-pb9qd\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.115749 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-766d94c967-pb9qd"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.229070 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kxk7p"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.383355 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config\") pod \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") "
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.383497 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle\") pod \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") "
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.383544 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fp59n\" (UniqueName: \"kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n\") pod \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\" (UID: \"4f5241dc-9fdc-4e75-9924-fb00a2e6119d\") "
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.405130 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n" (OuterVolumeSpecName: "kube-api-access-fp59n") pod "4f5241dc-9fdc-4e75-9924-fb00a2e6119d" (UID: "4f5241dc-9fdc-4e75-9924-fb00a2e6119d"). InnerVolumeSpecName "kube-api-access-fp59n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.426162 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f5241dc-9fdc-4e75-9924-fb00a2e6119d" (UID: "4f5241dc-9fdc-4e75-9924-fb00a2e6119d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.429971 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config" (OuterVolumeSpecName: "config") pod "4f5241dc-9fdc-4e75-9924-fb00a2e6119d" (UID: "4f5241dc-9fdc-4e75-9924-fb00a2e6119d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.490442 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.490489 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fp59n\" (UniqueName: \"kubernetes.io/projected/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-kube-api-access-fp59n\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.490500 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4f5241dc-9fdc-4e75-9924-fb00a2e6119d-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.563973 5136 scope.go:117] "RemoveContainer" containerID="6bbf8fa191e070a1c91f2d1ea94b4f26f7559168925696c34903d08a5a0065c5"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.676152 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-kxk7p" event={"ID":"4f5241dc-9fdc-4e75-9924-fb00a2e6119d","Type":"ContainerDied","Data":"9c68bfb878007ca6b62fba813ddcba80cdee73ef5b85374234305710364f4b28"}
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.676187 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c68bfb878007ca6b62fba813ddcba80cdee73ef5b85374234305710364f4b28"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.676234 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-kxk7p"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.702626 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerStarted","Data":"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"}
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.702962 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerStarted","Data":"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"}
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.702973 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerStarted","Data":"5651bf9b6915e66b83ba1121283006e83d01461bec1d3f8053fa31cecb4a7017"}
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.703541 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f464f8686-f4nfl"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.703562 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-f464f8686-f4nfl"
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.720425 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerStarted","Data":"c7adb5a2a9c06da1244ea87915af7e6cfa3d0ce95887a1e45fa3f12d10155ea2"}
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.744521 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.745459 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-f464f8686-f4nfl" podStartSLOduration=2.745443697 podStartE2EDuration="2.745443697s" podCreationTimestamp="2026-03-20 07:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:29.721527022 +0000 UTC m=+1381.980838173" watchObservedRunningTime="2026-03-20 07:12:29.745443697 +0000 UTC m=+1382.004754838"
Mar 20 07:12:29 crc kubenswrapper[5136]: W0320 07:12:29.767241 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7a82425_91b7_43b8_b26e_ace42be9cdba.slice/crio-6c6e6f554a2daf084995a53a820c6c15f7723c013708ede47a5e369f225ae2a2 WatchSource:0}: Error finding container 6c6e6f554a2daf084995a53a820c6c15f7723c013708ede47a5e369f225ae2a2: Status 404 returned error can't find the container with id 6c6e6f554a2daf084995a53a820c6c15f7723c013708ede47a5e369f225ae2a2
Mar 20 07:12:29 crc kubenswrapper[5136]: I0320 07:12:29.826772 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-766d94c967-pb9qd"]
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.012469 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"]
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.012683 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" containerName="dnsmasq-dns" containerID="cri-o://5b9f552fe91aa28f997b092d45d1079ab658cc3b43ebd7ab371c11c9bbbdd7ef" gracePeriod=10
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.018129 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.054291 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"]
Mar 20 07:12:30 crc kubenswrapper[5136]: E0320 07:12:30.054867 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5241dc-9fdc-4e75-9924-fb00a2e6119d" containerName="neutron-db-sync"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.054878 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5241dc-9fdc-4e75-9924-fb00a2e6119d" containerName="neutron-db-sync"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.055054 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5241dc-9fdc-4e75-9924-fb00a2e6119d" containerName="neutron-db-sync"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.057590 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.075645 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-564b95fd68-m2j52"]
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.077891 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.084827 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.085033 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-scqxf"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.085185 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.085353 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.111371 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"]
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.122204 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-564b95fd68-m2j52"]
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209037 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209127 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209151 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209169 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209195 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209219 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209279 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hrd4\" (UniqueName: \"kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209308 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209332 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45w7v\" (UniqueName: \"kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.209370 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310804 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310855 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hrd4\" (UniqueName: \"kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310882 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45w7v\" (UniqueName: \"kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310930 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310945 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.310997 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.311015 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.311032 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.311052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.311081 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.312471 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb\")
pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.313516 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.314635 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.315392 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.321425 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.328235 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" 
Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.332853 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.335774 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.338045 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hrd4\" (UniqueName: \"kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4\") pod \"dnsmasq-dns-5d8b7b7d5-ft2q7\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.338518 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45w7v\" (UniqueName: \"kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.340330 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config\") pod \"neutron-564b95fd68-m2j52\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") " pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.410951 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="ac3187ac-eebe-4584-9624-e4127b6ee040" path="/var/lib/kubelet/pods/ac3187ac-eebe-4584-9624-e4127b6ee040/volumes" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.521800 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.606104 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.746405 5136 generic.go:334] "Generic (PLEG): container finished" podID="16f28a76-f7a5-4980-a693-7bd078f3c128" containerID="8f39b26d5a3a98eb4a0bd3688d06d7a44225eee0ee50099fc60fd5816beb4256" exitCode=0 Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.746561 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jv7f9" event={"ID":"16f28a76-f7a5-4980-a693-7bd078f3c128","Type":"ContainerDied","Data":"8f39b26d5a3a98eb4a0bd3688d06d7a44225eee0ee50099fc60fd5816beb4256"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.748484 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766d94c967-pb9qd" event={"ID":"fab90141-26b4-4e46-a916-82190508d6e8","Type":"ContainerStarted","Data":"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.748523 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766d94c967-pb9qd" event={"ID":"fab90141-26b4-4e46-a916-82190508d6e8","Type":"ContainerStarted","Data":"c19785656f47dd95cc1a27542636229f68d56209966c28654c4de9baa2a90613"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.748708 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.752242 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerStarted","Data":"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.755975 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerStarted","Data":"6c6e6f554a2daf084995a53a820c6c15f7723c013708ede47a5e369f225ae2a2"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.759978 5136 generic.go:334] "Generic (PLEG): container finished" podID="ec92c94f-350b-410f-af36-f232e43c51bc" containerID="5b9f552fe91aa28f997b092d45d1079ab658cc3b43ebd7ab371c11c9bbbdd7ef" exitCode=0 Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.761718 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" event={"ID":"ec92c94f-350b-410f-af36-f232e43c51bc","Type":"ContainerDied","Data":"5b9f552fe91aa28f997b092d45d1079ab658cc3b43ebd7ab371c11c9bbbdd7ef"} Mar 20 07:12:30 crc kubenswrapper[5136]: I0320 07:12:30.793678 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-766d94c967-pb9qd" podStartSLOduration=2.793663077 podStartE2EDuration="2.793663077s" podCreationTimestamp="2026-03-20 07:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:30.783622304 +0000 UTC m=+1383.042933455" watchObservedRunningTime="2026-03-20 07:12:30.793663077 +0000 UTC m=+1383.052974228" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.110044 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"] Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.327339 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443062 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443101 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443138 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443207 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443271 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd8c4\" (UniqueName: \"kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.443339 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc\") pod \"ec92c94f-350b-410f-af36-f232e43c51bc\" (UID: \"ec92c94f-350b-410f-af36-f232e43c51bc\") " Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.453696 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4" (OuterVolumeSpecName: "kube-api-access-vd8c4") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "kube-api-access-vd8c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.479989 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-564b95fd68-m2j52"] Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.496489 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.517356 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.518068 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.520294 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config" (OuterVolumeSpecName: "config") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.521952 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ec92c94f-350b-410f-af36-f232e43c51bc" (UID: "ec92c94f-350b-410f-af36-f232e43c51bc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.545949 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd8c4\" (UniqueName: \"kubernetes.io/projected/ec92c94f-350b-410f-af36-f232e43c51bc-kube-api-access-vd8c4\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.545978 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.545989 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.545999 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.546009 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.546026 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ec92c94f-350b-410f-af36-f232e43c51bc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.773243 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" event={"ID":"ec92c94f-350b-410f-af36-f232e43c51bc","Type":"ContainerDied","Data":"23b1640d39e4ae1160a3dad399449ea185d7930c6aa60e3ac354c8577e44362b"} Mar 20 07:12:31 crc 
kubenswrapper[5136]: I0320 07:12:31.773607 5136 scope.go:117] "RemoveContainer" containerID="5b9f552fe91aa28f997b092d45d1079ab658cc3b43ebd7ab371c11c9bbbdd7ef" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.773423 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d7fb48775-2qcl2" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.777754 5136 generic.go:334] "Generic (PLEG): container finished" podID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerID="9601c3b5171c0ad2c4a37d9af2b6800e2a7b9ef5252a7eafd6ffba3913617d30" exitCode=0 Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.777799 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" event={"ID":"8c492e9e-5703-4622-bcb8-6d77327cd1af","Type":"ContainerDied","Data":"9601c3b5171c0ad2c4a37d9af2b6800e2a7b9ef5252a7eafd6ffba3913617d30"} Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.777831 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" event={"ID":"8c492e9e-5703-4622-bcb8-6d77327cd1af","Type":"ContainerStarted","Data":"e8f7d39c70d06f5ebbf7c96df259d250580edf42a70ced6ee3a41c2d6954fc88"} Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.782653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerStarted","Data":"0955f2ff6e58a181eb4657826df44412140ece0a092f87584721929f1c23cd5d"} Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.782687 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerStarted","Data":"0d813176fbff380f2ecf1396ef58dbd6653c9f7fc00b3a5aa2671557b6efffb1"} Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.787014 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerStarted","Data":"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16"} Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.840041 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.840019698 podStartE2EDuration="4.840019698s" podCreationTimestamp="2026-03-20 07:12:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:31.823792113 +0000 UTC m=+1384.083103274" watchObservedRunningTime="2026-03-20 07:12:31.840019698 +0000 UTC m=+1384.099330849" Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.853978 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"] Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.860937 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d7fb48775-2qcl2"] Mar 20 07:12:31 crc kubenswrapper[5136]: I0320 07:12:31.866976 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.866957528 podStartE2EDuration="3.866957528s" podCreationTimestamp="2026-03-20 07:12:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:31.865967567 +0000 UTC m=+1384.125278738" watchObservedRunningTime="2026-03-20 07:12:31.866957528 +0000 UTC m=+1384.126268679" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.131531 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"] Mar 20 07:12:32 crc kubenswrapper[5136]: E0320 07:12:32.132304 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" 
containerName="init" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.132315 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" containerName="init" Mar 20 07:12:32 crc kubenswrapper[5136]: E0320 07:12:32.132342 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" containerName="dnsmasq-dns" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.132348 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" containerName="dnsmasq-dns" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.132506 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" containerName="dnsmasq-dns" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.141066 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"] Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.141169 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261347 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261415 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261437 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261490 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261522 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktsr8\" (UniqueName: \"kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8\") pod \"placement-dc8db4fdb-hpjdg\" (UID: 
\"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261554 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.261606 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.362858 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.362912 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktsr8\" (UniqueName: \"kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.362948 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: 
\"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.362996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.363020 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.363056 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.363078 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.363528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 
07:12:32.369113 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.369896 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.373997 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.374658 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.375594 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.382397 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktsr8\" (UniqueName: 
\"kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8\") pod \"placement-dc8db4fdb-hpjdg\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") " pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.406607 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec92c94f-350b-410f-af36-f232e43c51bc" path="/var/lib/kubelet/pods/ec92c94f-350b-410f-af36-f232e43c51bc/volumes" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.511905 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.760898 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"] Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.762915 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.767136 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.767359 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.786206 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"] Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870389 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870426 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870473 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870578 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870601 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj27q\" (UniqueName: \"kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870648 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.870703 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.972911 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973265 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973321 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973401 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mj27q\" (UniqueName: \"kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973469 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.973517 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.983219 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.983289 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.984987 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config\") pod 
\"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.989472 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.996913 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:32 crc kubenswrapper[5136]: I0320 07:12:32.999157 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:33 crc kubenswrapper[5136]: I0320 07:12:33.001003 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj27q\" (UniqueName: \"kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q\") pod \"neutron-6ff4f58fb9-7gtff\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:33 crc kubenswrapper[5136]: I0320 07:12:33.098954 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.593640 5136 scope.go:117] "RemoveContainer" containerID="2ae57c8c056dcb5d29e8273f89d2edf49540ec43cbd77b2ca4a8816f0f65f160" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.749050 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.810452 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6s9rd\" (UniqueName: \"kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd\") pod \"16f28a76-f7a5-4980-a693-7bd078f3c128\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.810784 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle\") pod \"16f28a76-f7a5-4980-a693-7bd078f3c128\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.810878 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data\") pod \"16f28a76-f7a5-4980-a693-7bd078f3c128\" (UID: \"16f28a76-f7a5-4980-a693-7bd078f3c128\") " Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.814108 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd" (OuterVolumeSpecName: "kube-api-access-6s9rd") pod "16f28a76-f7a5-4980-a693-7bd078f3c128" (UID: "16f28a76-f7a5-4980-a693-7bd078f3c128"). InnerVolumeSpecName "kube-api-access-6s9rd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.816840 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "16f28a76-f7a5-4980-a693-7bd078f3c128" (UID: "16f28a76-f7a5-4980-a693-7bd078f3c128"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.828066 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jv7f9" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.828080 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jv7f9" event={"ID":"16f28a76-f7a5-4980-a693-7bd078f3c128","Type":"ContainerDied","Data":"a6c0c7cbe316e747e497558676af55967b6ed940767c0667d59d4da80f64920a"} Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.828113 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6c0c7cbe316e747e497558676af55967b6ed940767c0667d59d4da80f64920a" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.830088 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerStarted","Data":"182398618c3a1531c9ad080ffbd4a768caa0163ecc71bb0c12e322558e13d0bb"} Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.880994 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16f28a76-f7a5-4980-a693-7bd078f3c128" (UID: "16f28a76-f7a5-4980-a693-7bd078f3c128"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.918914 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6s9rd\" (UniqueName: \"kubernetes.io/projected/16f28a76-f7a5-4980-a693-7bd078f3c128-kube-api-access-6s9rd\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.918941 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:34 crc kubenswrapper[5136]: I0320 07:12:34.918950 5136 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/16f28a76-f7a5-4980-a693-7bd078f3c128-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.104470 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"] Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.337134 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"] Mar 20 07:12:35 crc kubenswrapper[5136]: W0320 07:12:35.348082 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c52887a_70a8_4d00_a1f9_a5677fa48d1f.slice/crio-6635a1786b5854425a3f89e3fd4433884c9eeba6fdc2878722b9acdad452ee38 WatchSource:0}: Error finding container 6635a1786b5854425a3f89e3fd4433884c9eeba6fdc2878722b9acdad452ee38: Status 404 returned error can't find the container with id 6635a1786b5854425a3f89e3fd4433884c9eeba6fdc2878722b9acdad452ee38 Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.842096 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerStarted","Data":"445411040c2cad21dcc73efad8a46c4699ac626429795c9aa6391147eec609d7"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.846095 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerStarted","Data":"024fcc0cd809e83faec73e5ba56c99a83ab40c7bc4ad09d07aea8ace13ee29fb"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.846133 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerStarted","Data":"df57270fa341245294b8409621c8d255f8f17bac5716eb17c148b55857569799"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.846913 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.848035 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-llt2h" event={"ID":"2fc03366-82a1-4e30-a7e8-a06e16a8a14f","Type":"ContainerStarted","Data":"9cff72e5160a41a8305e76e0221624a76437830286641f5e18a9ed4e7ae3e23a"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.850924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerStarted","Data":"605e2f1b6fdab04852864ae8ba9a1933cc6fbe478b172080fd10f5d23b52f0fe"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.850952 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerStarted","Data":"14f94b6d1dd07b874e83aed25b1716c42ede7203afe8fc38064921b976f5c65d"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.850963 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerStarted","Data":"456674fb963104b875873a874337c20143adc46f3a809c5e2ae04c7d773c4641"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.851492 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.851519 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-dc8db4fdb-hpjdg" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.856426 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" event={"ID":"8c492e9e-5703-4622-bcb8-6d77327cd1af","Type":"ContainerStarted","Data":"8e1ca9a5a655afee1ca95c3cf5addf49787cbb523449118713907bd8465f184a"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.857005 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.867486 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerStarted","Data":"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.867567 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerStarted","Data":"6635a1786b5854425a3f89e3fd4433884c9eeba6fdc2878722b9acdad452ee38"} Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.875566 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-564b95fd68-m2j52" podStartSLOduration=5.875548372 podStartE2EDuration="5.875548372s" podCreationTimestamp="2026-03-20 07:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:35.864420505 +0000 UTC m=+1388.123731656" watchObservedRunningTime="2026-03-20 07:12:35.875548372 +0000 UTC m=+1388.134859523" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.886628 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-llt2h" podStartSLOduration=2.874945414 podStartE2EDuration="39.886610637s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="2026-03-20 07:11:57.750745626 +0000 UTC m=+1350.010056777" lastFinishedPulling="2026-03-20 07:12:34.762410849 +0000 UTC m=+1387.021722000" observedRunningTime="2026-03-20 07:12:35.881680712 +0000 UTC m=+1388.140991873" watchObservedRunningTime="2026-03-20 07:12:35.886610637 +0000 UTC m=+1388.145921788" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.904650 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" podStartSLOduration=6.904634038 podStartE2EDuration="6.904634038s" podCreationTimestamp="2026-03-20 07:12:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:35.904587657 +0000 UTC m=+1388.163898808" watchObservedRunningTime="2026-03-20 07:12:35.904634038 +0000 UTC m=+1388.163945179" Mar 20 07:12:35 crc kubenswrapper[5136]: I0320 07:12:35.943888 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-dc8db4fdb-hpjdg" podStartSLOduration=3.938624617 podStartE2EDuration="3.938624617s" podCreationTimestamp="2026-03-20 07:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:35.926106558 +0000 UTC m=+1388.185417709" watchObservedRunningTime="2026-03-20 07:12:35.938624617 +0000 UTC m=+1388.197935768" Mar 20 07:12:36 crc 
kubenswrapper[5136]: I0320 07:12:36.100809 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"] Mar 20 07:12:36 crc kubenswrapper[5136]: E0320 07:12:36.101241 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" containerName="barbican-db-sync" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.101255 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" containerName="barbican-db-sync" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.101475 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" containerName="barbican-db-sync" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.102655 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.120785 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.121026 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-tz5pc" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.121613 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.143127 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.144614 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.147120 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.159700 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.159784 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctrhs\" (UniqueName: \"kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.159836 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.159862 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " 
pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.159895 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.188224 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.206670 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.265580 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"] Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275375 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275455 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnkzp\" (UniqueName: \"kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275475 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275503 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275564 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctrhs\" (UniqueName: \"kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275584 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275618 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275635 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275649 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.275677 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.277142 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.282022 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" Mar 20 07:12:36 crc 
kubenswrapper[5136]: I0320 07:12:36.283335 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.283505 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.301419 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctrhs\" (UniqueName: \"kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs\") pod \"barbican-keystone-listener-65ccfb89b4-s479g\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") " pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.314457 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"]
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.316102 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.335966 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"]
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.364851 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"]
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.376913 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnkzp\" (UniqueName: \"kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.376959 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.377026 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.377060 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.377078 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.379649 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.381112 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.381222 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.382433 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.389750 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"]
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.390446 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.392517 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.401633 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnkzp\" (UniqueName: \"kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp\") pod \"barbican-worker-78df67c79-bqz8t\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") " pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.476193 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.490635 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491141 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62m5b\" (UniqueName: \"kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491180 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx2gt\" (UniqueName: \"kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491202 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491234 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491291 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491313 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491342 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491359 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491377 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.491401 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.492156 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595236 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62m5b\" (UniqueName: \"kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595356 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx2gt\" (UniqueName: \"kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595374 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595404 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595466 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595484 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595526 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595539 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595557 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.595598 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.596884 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.597488 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.598968 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.601501 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.602438 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.603343 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.618067 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.622741 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.622885 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.626918 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx2gt\" (UniqueName: \"kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt\") pod \"barbican-api-d86fb98dd-76pm8\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.635542 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62m5b\" (UniqueName: \"kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b\") pod \"dnsmasq-dns-7df4c9958f-99prp\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.650422 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.716957 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.916801 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerStarted","Data":"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685"}
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.918993 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6ff4f58fb9-7gtff"
Mar 20 07:12:36 crc kubenswrapper[5136]: I0320 07:12:36.936086 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6ff4f58fb9-7gtff" podStartSLOduration=4.936069815 podStartE2EDuration="4.936069815s" podCreationTimestamp="2026-03-20 07:12:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:36.933016099 +0000 UTC m=+1389.192327250" watchObservedRunningTime="2026-03-20 07:12:36.936069815 +0000 UTC m=+1389.195380966"
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.083891 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"]
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.102078 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"]
Mar 20 07:12:37 crc kubenswrapper[5136]: W0320 07:12:37.106493 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a59ab3d_3094_4e10_bbde_44479696f752.slice/crio-adeda094c452ea454b57acc36b81655df6fbdea86bb257845c27b0e1e0656a6f WatchSource:0}: Error finding container adeda094c452ea454b57acc36b81655df6fbdea86bb257845c27b0e1e0656a6f: Status 404 returned error can't find the container with id adeda094c452ea454b57acc36b81655df6fbdea86bb257845c27b0e1e0656a6f
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.321151 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"]
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.339679 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"]
Mar 20 07:12:37 crc kubenswrapper[5136]: W0320 07:12:37.365213 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e6c911d_6da1_440a_8d63_d61e68b0272c.slice/crio-f5741bc42c68687793c23bf8ad98b077172ee8bf2f31364fb5498b69a4e6d1bb WatchSource:0}: Error finding container f5741bc42c68687793c23bf8ad98b077172ee8bf2f31364fb5498b69a4e6d1bb: Status 404 returned error can't find the container with id f5741bc42c68687793c23bf8ad98b077172ee8bf2f31364fb5498b69a4e6d1bb
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.940448 5136 generic.go:334] "Generic (PLEG): container finished" podID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerID="e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e" exitCode=0
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.940891 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" event={"ID":"3e6c911d-6da1-440a-8d63-d61e68b0272c","Type":"ContainerDied","Data":"e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e"}
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.940924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" event={"ID":"3e6c911d-6da1-440a-8d63-d61e68b0272c","Type":"ContainerStarted","Data":"f5741bc42c68687793c23bf8ad98b077172ee8bf2f31364fb5498b69a4e6d1bb"}
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.950765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78df67c79-bqz8t" event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerStarted","Data":"87dc6bb8fc1b9abd24b71389abdb4a22e7af9a9d787041070ce4c3a66cfdd142"}
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.963043 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerStarted","Data":"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6"}
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.963098 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerStarted","Data":"906ac955356980032ab967bfee58aa1175878c2109f34d9bbc6cfcc9e1a56ade"}
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.976241 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerStarted","Data":"adeda094c452ea454b57acc36b81655df6fbdea86bb257845c27b0e1e0656a6f"}
Mar 20 07:12:37 crc kubenswrapper[5136]: I0320 07:12:37.976871 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="dnsmasq-dns" containerID="cri-o://8e1ca9a5a655afee1ca95c3cf5addf49787cbb523449118713907bd8465f184a" gracePeriod=10
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.070991 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.071040 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.133640 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.151268 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.750759 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"]
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.755575 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.761584 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.762007 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.772076 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"]
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786382 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786488 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786541 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786715 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786769 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzlq9\" (UniqueName: \"kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.786795 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.887740 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.887787 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.887890 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.887947 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.887987 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.888027 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzlq9\" (UniqueName: \"kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.888049 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.888567 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.895449 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.895622 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.896627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.899447 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.916837 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzlq9\" (UniqueName: \"kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.924320 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs\") pod \"barbican-api-64845646dd-wf28v\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.995099 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerStarted","Data":"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf"}
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.996296 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:38 crc kubenswrapper[5136]: I0320 07:12:38.996328 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-d86fb98dd-76pm8"
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.003923 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" event={"ID":"3e6c911d-6da1-440a-8d63-d61e68b0272c","Type":"ContainerStarted","Data":"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd"}
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.004766 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7df4c9958f-99prp"
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.009066 5136 generic.go:334] "Generic (PLEG): container finished" podID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerID="8e1ca9a5a655afee1ca95c3cf5addf49787cbb523449118713907bd8465f184a" exitCode=0
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.010718 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" event={"ID":"8c492e9e-5703-4622-bcb8-6d77327cd1af","Type":"ContainerDied","Data":"8e1ca9a5a655afee1ca95c3cf5addf49787cbb523449118713907bd8465f184a"}
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.010750 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.011348 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.023932 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-d86fb98dd-76pm8" podStartSLOduration=3.023911645 podStartE2EDuration="3.023911645s" podCreationTimestamp="2026-03-20 07:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:39.012858301 +0000 UTC m=+1391.272169452" watchObservedRunningTime="2026-03-20 07:12:39.023911645 +0000 UTC m=+1391.283222796"
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.050900 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" podStartSLOduration=3.050883226 podStartE2EDuration="3.050883226s" podCreationTimestamp="2026-03-20 07:12:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:39.044040283 +0000 UTC m=+1391.303351434" watchObservedRunningTime="2026-03-20 07:12:39.050883226 +0000 UTC m=+1391.310194377"
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.064873 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.065166 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.080936 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.082946 5136 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.139071 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.140023 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192236 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192327 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192423 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192461 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hrd4\" (UniqueName: \"kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192545 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.192579 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0\") pod \"8c492e9e-5703-4622-bcb8-6d77327cd1af\" (UID: \"8c492e9e-5703-4622-bcb8-6d77327cd1af\") " Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.214014 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4" (OuterVolumeSpecName: "kube-api-access-8hrd4") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "kube-api-access-8hrd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.250303 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.251324 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.252210 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config" (OuterVolumeSpecName: "config") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.295316 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.301865 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.307849 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.308002 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hrd4\" (UniqueName: \"kubernetes.io/projected/8c492e9e-5703-4622-bcb8-6d77327cd1af-kube-api-access-8hrd4\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.316206 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.320972 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8c492e9e-5703-4622-bcb8-6d77327cd1af" (UID: "8c492e9e-5703-4622-bcb8-6d77327cd1af"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.410314 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:39 crc kubenswrapper[5136]: I0320 07:12:39.410358 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8c492e9e-5703-4622-bcb8-6d77327cd1af-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.033484 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" event={"ID":"8c492e9e-5703-4622-bcb8-6d77327cd1af","Type":"ContainerDied","Data":"e8f7d39c70d06f5ebbf7c96df259d250580edf42a70ced6ee3a41c2d6954fc88"} Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.034187 5136 scope.go:117] "RemoveContainer" containerID="8e1ca9a5a655afee1ca95c3cf5addf49787cbb523449118713907bd8465f184a" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.033578 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d8b7b7d5-ft2q7" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.038479 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.038704 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.104082 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"] Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.111915 5136 scope.go:117] "RemoveContainer" containerID="9601c3b5171c0ad2c4a37d9af2b6800e2a7b9ef5252a7eafd6ffba3913617d30" Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.115758 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5d8b7b7d5-ft2q7"] Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.185002 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"] Mar 20 07:12:40 crc kubenswrapper[5136]: W0320 07:12:40.216871 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2bf7a9d_44f9_407f_8a6c_6bc56ddde30b.slice/crio-a17e5911dd44a70eaddf965e56b21ab05149f56b96de7f20bf6f4c657c514884 WatchSource:0}: Error finding container a17e5911dd44a70eaddf965e56b21ab05149f56b96de7f20bf6f4c657c514884: Status 404 returned error can't find the container with id a17e5911dd44a70eaddf965e56b21ab05149f56b96de7f20bf6f4c657c514884 Mar 20 07:12:40 crc kubenswrapper[5136]: I0320 07:12:40.410508 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" path="/var/lib/kubelet/pods/8c492e9e-5703-4622-bcb8-6d77327cd1af/volumes" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.051246 5136 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerStarted","Data":"afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.051579 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerStarted","Data":"cd65718bfac09f4d934fe1bf3f629f5d852e12343f9a1b480d6984e7497c79aa"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.058906 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.060333 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerStarted","Data":"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.060375 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerStarted","Data":"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.060384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerStarted","Data":"a17e5911dd44a70eaddf965e56b21ab05149f56b96de7f20bf6f4c657c514884"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.060499 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.064412 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-worker-78df67c79-bqz8t" event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerStarted","Data":"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.064442 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78df67c79-bqz8t" event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerStarted","Data":"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0"} Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.073801 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" podStartSLOduration=2.44589167 podStartE2EDuration="5.073785313s" podCreationTimestamp="2026-03-20 07:12:36 +0000 UTC" firstStartedPulling="2026-03-20 07:12:37.110877912 +0000 UTC m=+1389.370189063" lastFinishedPulling="2026-03-20 07:12:39.738771555 +0000 UTC m=+1391.998082706" observedRunningTime="2026-03-20 07:12:41.066450315 +0000 UTC m=+1393.325761466" watchObservedRunningTime="2026-03-20 07:12:41.073785313 +0000 UTC m=+1393.333096464" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.076684 5136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.110210 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-78df67c79-bqz8t" podStartSLOduration=2.499621845 podStartE2EDuration="5.110192958s" podCreationTimestamp="2026-03-20 07:12:36 +0000 UTC" firstStartedPulling="2026-03-20 07:12:37.103287556 +0000 UTC m=+1389.362598707" lastFinishedPulling="2026-03-20 07:12:39.713858659 +0000 UTC m=+1391.973169820" observedRunningTime="2026-03-20 07:12:41.109150915 +0000 UTC m=+1393.368462066" watchObservedRunningTime="2026-03-20 07:12:41.110192958 +0000 UTC m=+1393.369504109" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.136982 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-64845646dd-wf28v" podStartSLOduration=3.136952151 podStartE2EDuration="3.136952151s" podCreationTimestamp="2026-03-20 07:12:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:41.129658504 +0000 UTC m=+1393.388969655" watchObservedRunningTime="2026-03-20 07:12:41.136952151 +0000 UTC m=+1393.396263302" Mar 20 07:12:41 crc kubenswrapper[5136]: I0320 07:12:41.221250 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.088110 5136 generic.go:334] "Generic (PLEG): container finished" podID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" containerID="9cff72e5160a41a8305e76e0221624a76437830286641f5e18a9ed4e7ae3e23a" exitCode=0 Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.089205 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-llt2h" event={"ID":"2fc03366-82a1-4e30-a7e8-a06e16a8a14f","Type":"ContainerDied","Data":"9cff72e5160a41a8305e76e0221624a76437830286641f5e18a9ed4e7ae3e23a"} Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.090216 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.538326 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.538415 5136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 07:12:42 crc kubenswrapper[5136]: I0320 07:12:42.615734 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 07:12:45 crc kubenswrapper[5136]: I0320 07:12:45.996851 5136 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-llt2h" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051537 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051619 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051694 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051794 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051839 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4skhx\" (UniqueName: \"kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.051863 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data\") pod \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\" (UID: \"2fc03366-82a1-4e30-a7e8-a06e16a8a14f\") " Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.052085 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.053312 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.056282 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx" (OuterVolumeSpecName: "kube-api-access-4skhx") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "kube-api-access-4skhx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.057027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts" (OuterVolumeSpecName: "scripts") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.057868 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.088966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.107664 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data" (OuterVolumeSpecName: "config-data") pod "2fc03366-82a1-4e30-a7e8-a06e16a8a14f" (UID: "2fc03366-82a1-4e30-a7e8-a06e16a8a14f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.126024 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-llt2h" event={"ID":"2fc03366-82a1-4e30-a7e8-a06e16a8a14f","Type":"ContainerDied","Data":"5200ac7ba7db438f0d107516ee128664a59b5fde08b29a5ce98e40e534824c47"} Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.126070 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5200ac7ba7db438f0d107516ee128664a59b5fde08b29a5ce98e40e534824c47" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.126126 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-llt2h" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.155340 5136 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.155381 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.155390 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4skhx\" (UniqueName: \"kubernetes.io/projected/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-kube-api-access-4skhx\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.155399 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.155407 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2fc03366-82a1-4e30-a7e8-a06e16a8a14f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.653012 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.732801 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:12:46 crc kubenswrapper[5136]: I0320 07:12:46.733063 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="dnsmasq-dns" containerID="cri-o://60532e8d3b2de1260b02b04d62ab8b4d0eed7842744135d5425179cf256cd7d4" gracePeriod=10 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.159044 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerStarted","Data":"306693c3fd00ea7d6ce01b03a7c8a7af984cbfe8170200b7df46fd72de431115"} Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.159661 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-central-agent" containerID="cri-o://7336863799fdb9fab29de80cd8bd1d394cbcbcf4fd209c912a6795c7665f330a" gracePeriod=30 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.159921 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="sg-core" containerID="cri-o://445411040c2cad21dcc73efad8a46c4699ac626429795c9aa6391147eec609d7" gracePeriod=30 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.159951 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:12:47 crc 
kubenswrapper[5136]: I0320 07:12:47.160013 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="proxy-httpd" containerID="cri-o://306693c3fd00ea7d6ce01b03a7c8a7af984cbfe8170200b7df46fd72de431115" gracePeriod=30 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.160026 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-notification-agent" containerID="cri-o://cc654c94c6c668e03f2204d6c1f1eaaff73ba2c53ec4645dd69d881bd133a570" gracePeriod=30 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.177785 5136 generic.go:334] "Generic (PLEG): container finished" podID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerID="60532e8d3b2de1260b02b04d62ab8b4d0eed7842744135d5425179cf256cd7d4" exitCode=0 Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.177868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" event={"ID":"98a77e70-cc82-4a51-8475-d003a0ccf43e","Type":"ContainerDied","Data":"60532e8d3b2de1260b02b04d62ab8b4d0eed7842744135d5425179cf256cd7d4"} Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.196069 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.19249171 podStartE2EDuration="51.196051304s" podCreationTimestamp="2026-03-20 07:11:56 +0000 UTC" firstStartedPulling="2026-03-20 07:11:58.00598099 +0000 UTC m=+1350.265292141" lastFinishedPulling="2026-03-20 07:12:46.009540574 +0000 UTC m=+1398.268851735" observedRunningTime="2026-03-20 07:12:47.187215908 +0000 UTC m=+1399.446527069" watchObservedRunningTime="2026-03-20 07:12:47.196051304 +0000 UTC m=+1399.455362445" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.280844 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/cinder-scheduler-0"] Mar 20 07:12:47 crc kubenswrapper[5136]: E0320 07:12:47.281194 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.281206 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: E0320 07:12:47.281227 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="init" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.281232 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="init" Mar 20 07:12:47 crc kubenswrapper[5136]: E0320 07:12:47.281255 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" containerName="cinder-db-sync" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.281261 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" containerName="cinder-db-sync" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.281431 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c492e9e-5703-4622-bcb8-6d77327cd1af" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.281452 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" containerName="cinder-db-sync" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.282519 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.289778 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-ps866" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.290055 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.290413 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.290906 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.300543 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.348594 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"] Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.356085 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.365270 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"] Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.369163 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.377645 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.377726 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.377855 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzhh2\" (UniqueName: \"kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.377920 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.377959 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " 
pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.378006 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481445 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481521 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481565 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481654 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481673 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481695 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7dlp\" (UniqueName: \"kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp\") pod \"98a77e70-cc82-4a51-8475-d003a0ccf43e\" (UID: \"98a77e70-cc82-4a51-8475-d003a0ccf43e\") " Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481900 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq6zh\" (UniqueName: \"kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481933 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481952 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.481983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482000 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzhh2\" (UniqueName: \"kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482057 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482110 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482142 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482170 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482187 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482202 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.482222 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.495326 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.509181 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp" (OuterVolumeSpecName: "kube-api-access-n7dlp") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). 
InnerVolumeSpecName "kube-api-access-n7dlp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.513906 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.521570 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.521907 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.540048 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:47 crc kubenswrapper[5136]: E0320 07:12:47.541921 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="init" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.541941 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="init" Mar 20 07:12:47 crc kubenswrapper[5136]: E0320 07:12:47.541966 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.541976 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.542311 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="dnsmasq-dns" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.543777 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.547312 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.548016 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.556681 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzhh2\" (UniqueName: \"kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2\") pod \"cinder-scheduler-0\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.577446 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584434 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584549 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584626 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584655 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584703 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584796 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584912 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584961 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq6zh\" (UniqueName: \"kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.584988 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585070 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585176 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585205 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz48n\" (UniqueName: \"kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585334 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585457 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.585470 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7dlp\" (UniqueName: \"kubernetes.io/projected/98a77e70-cc82-4a51-8475-d003a0ccf43e-kube-api-access-n7dlp\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.586305 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.587183 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.589420 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.590373 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.594230 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.599221 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config" (OuterVolumeSpecName: "config") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.599261 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.609776 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq6zh\" (UniqueName: \"kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh\") pod \"dnsmasq-dns-8995fbb57-rp89c\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.619299 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.621404 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.637810 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "98a77e70-cc82-4a51-8475-d003a0ccf43e" (UID: "98a77e70-cc82-4a51-8475-d003a0ccf43e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687264 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687318 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687392 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz48n\" (UniqueName: \"kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687434 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687466 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687499 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687519 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687567 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687578 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687587 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687595 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/98a77e70-cc82-4a51-8475-d003a0ccf43e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.687977 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.691787 5136 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.692100 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.692391 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.694683 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.697422 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.703824 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.704078 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.723442 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz48n\" (UniqueName: \"kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n\") pod \"cinder-api-0\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " pod="openstack/cinder-api-0" Mar 20 07:12:47 crc kubenswrapper[5136]: I0320 07:12:47.886788 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.158773 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"] Mar 20 07:12:48 crc kubenswrapper[5136]: W0320 07:12:48.162952 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod200895ec_fcf9_436d_82d3_c26c198e1485.slice/crio-1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d WatchSource:0}: Error finding container 1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d: Status 404 returned error can't find the container with id 1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.203781 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff72278d-b5e7-427b-8581-52ff89c57176" containerID="306693c3fd00ea7d6ce01b03a7c8a7af984cbfe8170200b7df46fd72de431115" exitCode=0 Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.203944 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff72278d-b5e7-427b-8581-52ff89c57176" containerID="445411040c2cad21dcc73efad8a46c4699ac626429795c9aa6391147eec609d7" exitCode=2 Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.203999 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff72278d-b5e7-427b-8581-52ff89c57176" 
containerID="7336863799fdb9fab29de80cd8bd1d394cbcbcf4fd209c912a6795c7665f330a" exitCode=0 Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.203885 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerDied","Data":"306693c3fd00ea7d6ce01b03a7c8a7af984cbfe8170200b7df46fd72de431115"} Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.204233 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerDied","Data":"445411040c2cad21dcc73efad8a46c4699ac626429795c9aa6391147eec609d7"} Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.204300 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerDied","Data":"7336863799fdb9fab29de80cd8bd1d394cbcbcf4fd209c912a6795c7665f330a"} Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.212350 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" event={"ID":"200895ec-fcf9-436d-82d3-c26c198e1485","Type":"ContainerStarted","Data":"1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d"} Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.229038 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" event={"ID":"98a77e70-cc82-4a51-8475-d003a0ccf43e","Type":"ContainerDied","Data":"b87a47428636e1da6de88425ef519d1313ec7d6e857d9dadf3b1f683fef7c84b"} Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.229084 5136 scope.go:117] "RemoveContainer" containerID="60532e8d3b2de1260b02b04d62ab8b4d0eed7842744135d5425179cf256cd7d4" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.229267 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.267608 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.326436 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.335132 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7ff6d84665-6bvps"] Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.339564 5136 scope.go:117] "RemoveContainer" containerID="d619f45ef012eb3e909a4c91d853e16f0aac41aa3f7c34e99d85f79a8050ee1a" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.421074 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" path="/var/lib/kubelet/pods/98a77e70-cc82-4a51-8475-d003a0ccf43e/volumes" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.431498 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.448314 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:48 crc kubenswrapper[5136]: I0320 07:12:48.579213 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.272014 5136 generic.go:334] "Generic (PLEG): container finished" podID="ff72278d-b5e7-427b-8581-52ff89c57176" containerID="cc654c94c6c668e03f2204d6c1f1eaaff73ba2c53ec4645dd69d881bd133a570" exitCode=0 Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.272469 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerDied","Data":"cc654c94c6c668e03f2204d6c1f1eaaff73ba2c53ec4645dd69d881bd133a570"} Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.293142 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerStarted","Data":"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a"} Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.293181 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerStarted","Data":"91ca9f6487866d4c1ccc97b9b95931b4963d53b6c7bb374292c03d89eecac13f"} Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.298083 5136 generic.go:334] "Generic (PLEG): container finished" podID="200895ec-fcf9-436d-82d3-c26c198e1485" containerID="3e0d0bab07ba893f2ec5b9f186f6e1ac58691443de33a6064347527effa3dc1f" exitCode=0 Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.298199 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" event={"ID":"200895ec-fcf9-436d-82d3-c26c198e1485","Type":"ContainerDied","Data":"3e0d0bab07ba893f2ec5b9f186f6e1ac58691443de33a6064347527effa3dc1f"} Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.343933 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerStarted","Data":"044038db3dbd29add28a6376877ab76e790fddb6ebde5915cf5bb01e87957d6b"} Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.698654 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743634 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743692 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743750 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743856 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mmcd\" (UniqueName: \"kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743967 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.743990 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd\") pod \"ff72278d-b5e7-427b-8581-52ff89c57176\" (UID: \"ff72278d-b5e7-427b-8581-52ff89c57176\") " Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.744742 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.745087 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.756947 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts" (OuterVolumeSpecName: "scripts") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.766706 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd" (OuterVolumeSpecName: "kube-api-access-6mmcd") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "kube-api-access-6mmcd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.785473 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.846347 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.846371 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.846380 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ff72278d-b5e7-427b-8581-52ff89c57176-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.846389 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" 
Mar 20 07:12:49 crc kubenswrapper[5136]: I0320 07:12:49.846399 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mmcd\" (UniqueName: \"kubernetes.io/projected/ff72278d-b5e7-427b-8581-52ff89c57176-kube-api-access-6mmcd\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.007695 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.018931 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data" (OuterVolumeSpecName: "config-data") pod "ff72278d-b5e7-427b-8581-52ff89c57176" (UID: "ff72278d-b5e7-427b-8581-52ff89c57176"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.053273 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.053404 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff72278d-b5e7-427b-8581-52ff89c57176-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.148073 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.389094 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ff72278d-b5e7-427b-8581-52ff89c57176","Type":"ContainerDied","Data":"b70abbe701b5afa37deb9280d8bab4f32e4ab209764879ee00b0808064143809"} Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.389123 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.389170 5136 scope.go:117] "RemoveContainer" containerID="306693c3fd00ea7d6ce01b03a7c8a7af984cbfe8170200b7df46fd72de431115" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.411225 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.411252 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" event={"ID":"200895ec-fcf9-436d-82d3-c26c198e1485","Type":"ContainerStarted","Data":"01aa356b57e965220f79e7a24da86937ea014054be6bb673baf18c8bb2471582"} Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.411964 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerStarted","Data":"2cfb10b9d836c0f827d7cbf404deea18e90ee9f1642613e1791248f726bfa45a"} Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.429575 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" podStartSLOduration=3.429557642 podStartE2EDuration="3.429557642s" podCreationTimestamp="2026-03-20 07:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:50.420930863 +0000 UTC m=+1402.680242034" watchObservedRunningTime="2026-03-20 07:12:50.429557642 +0000 UTC m=+1402.688868793" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.431522 5136 scope.go:117] "RemoveContainer" containerID="445411040c2cad21dcc73efad8a46c4699ac626429795c9aa6391147eec609d7" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.446613 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.463890 5136 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.499879 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:12:50 crc kubenswrapper[5136]: E0320 07:12:50.500318 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="sg-core" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500335 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="sg-core" Mar 20 07:12:50 crc kubenswrapper[5136]: E0320 07:12:50.500342 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-central-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500349 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-central-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: E0320 07:12:50.500367 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-notification-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500373 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-notification-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: E0320 07:12:50.500381 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="proxy-httpd" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500390 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="proxy-httpd" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500542 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="sg-core" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500557 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="proxy-httpd" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500573 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-notification-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.500586 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" containerName="ceilometer-central-agent" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.502381 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.504977 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.505263 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.508979 5136 scope.go:117] "RemoveContainer" containerID="cc654c94c6c668e03f2204d6c1f1eaaff73ba2c53ec4645dd69d881bd133a570" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.511970 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.545938 5136 scope.go:117] "RemoveContainer" containerID="7336863799fdb9fab29de80cd8bd1d394cbcbcf4fd209c912a6795c7665f330a" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565209 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle\") pod 
\"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565317 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565350 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565405 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565442 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.565535 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlljw\" (UniqueName: \"kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: 
I0320 07:12:50.565570 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667119 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dlljw\" (UniqueName: \"kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667184 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667227 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667288 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667320 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667366 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.667396 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.668076 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.672487 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.673110 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.677511 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.685038 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.685939 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.690539 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dlljw\" (UniqueName: \"kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw\") pod \"ceilometer-0\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " pod="openstack/ceilometer-0" Mar 20 07:12:50 crc kubenswrapper[5136]: I0320 07:12:50.822466 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.302479 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.353213 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.364467 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.421879 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerStarted","Data":"b81f7d930b2cc73362e82a5dc550922c5559167f66d5246607f3e9e7db350a2e"} Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.430221 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerStarted","Data":"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95"} Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.430390 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api-log" containerID="cri-o://ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" gracePeriod=30 Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.430752 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.430868 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api" 
containerID="cri-o://27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" gracePeriod=30 Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.433861 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerStarted","Data":"ffa0508cf624ffc95ed314dc6a07f83a9f8e8ec653c0427813fff7d7bbe42409"} Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.437141 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"] Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.437352 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d86fb98dd-76pm8" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api-log" containerID="cri-o://e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6" gracePeriod=30 Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.437496 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-d86fb98dd-76pm8" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api" containerID="cri-o://62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf" gracePeriod=30 Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.460671 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.870813025 podStartE2EDuration="4.460650078s" podCreationTimestamp="2026-03-20 07:12:47 +0000 UTC" firstStartedPulling="2026-03-20 07:12:48.286001184 +0000 UTC m=+1400.545312365" lastFinishedPulling="2026-03-20 07:12:48.875838267 +0000 UTC m=+1401.135149418" observedRunningTime="2026-03-20 07:12:51.454421653 +0000 UTC m=+1403.713732824" watchObservedRunningTime="2026-03-20 07:12:51.460650078 +0000 UTC m=+1403.719961219" Mar 20 07:12:51 crc kubenswrapper[5136]: I0320 07:12:51.481176 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.481160946 podStartE2EDuration="4.481160946s" podCreationTimestamp="2026-03-20 07:12:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:51.475294943 +0000 UTC m=+1403.734606094" watchObservedRunningTime="2026-03-20 07:12:51.481160946 +0000 UTC m=+1403.740472097" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.155057 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7ff6d84665-6bvps" podUID="98a77e70-cc82-4a51-8475-d003a0ccf43e" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.145:5353: i/o timeout" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.247199 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.400501 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.400894 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.400953 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: 
\"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.400979 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.401006 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.401046 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.401085 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz48n\" (UniqueName: \"kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n\") pod \"1f323747-95a7-4199-b250-bb5591a1c182\" (UID: \"1f323747-95a7-4199-b250-bb5591a1c182\") " Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.405924 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.407622 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.407961 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs" (OuterVolumeSpecName: "logs") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.408225 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts" (OuterVolumeSpecName: "scripts") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.412982 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff72278d-b5e7-427b-8581-52ff89c57176" path="/var/lib/kubelet/pods/ff72278d-b5e7-427b-8581-52ff89c57176/volumes" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.436052 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n" (OuterVolumeSpecName: "kube-api-access-pz48n") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "kube-api-access-pz48n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.441060 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.459531 5136 generic.go:334] "Generic (PLEG): container finished" podID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerID="e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6" exitCode=143 Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.459632 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerDied","Data":"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6"} Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.463249 5136 generic.go:334] "Generic (PLEG): container finished" podID="1f323747-95a7-4199-b250-bb5591a1c182" containerID="27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" exitCode=0 Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.463289 5136 generic.go:334] "Generic (PLEG): container finished" podID="1f323747-95a7-4199-b250-bb5591a1c182" containerID="ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" exitCode=143 Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.463440 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.464267 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerDied","Data":"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95"} Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.464299 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerDied","Data":"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a"} Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.464313 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1f323747-95a7-4199-b250-bb5591a1c182","Type":"ContainerDied","Data":"91ca9f6487866d4c1ccc97b9b95931b4963d53b6c7bb374292c03d89eecac13f"} Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.464331 5136 scope.go:117] "RemoveContainer" containerID="27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.470412 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerStarted","Data":"da66db3f467f27617e353e5b7cb9122c8ae01fca01e9365d62e83fc76b60055a"} Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.470903 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data" (OuterVolumeSpecName: "config-data") pod "1f323747-95a7-4199-b250-bb5591a1c182" (UID: "1f323747-95a7-4199-b250-bb5591a1c182"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.487770 5136 scope.go:117] "RemoveContainer" containerID="ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503541 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503852 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1f323747-95a7-4199-b250-bb5591a1c182-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503862 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503871 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503880 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1f323747-95a7-4199-b250-bb5591a1c182-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503888 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1f323747-95a7-4199-b250-bb5591a1c182-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.503896 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz48n\" (UniqueName: 
\"kubernetes.io/projected/1f323747-95a7-4199-b250-bb5591a1c182-kube-api-access-pz48n\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.522379 5136 scope.go:117] "RemoveContainer" containerID="27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" Mar 20 07:12:52 crc kubenswrapper[5136]: E0320 07:12:52.523203 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95\": container with ID starting with 27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95 not found: ID does not exist" containerID="27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.523290 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95"} err="failed to get container status \"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95\": rpc error: code = NotFound desc = could not find container \"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95\": container with ID starting with 27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95 not found: ID does not exist" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.523630 5136 scope.go:117] "RemoveContainer" containerID="ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" Mar 20 07:12:52 crc kubenswrapper[5136]: E0320 07:12:52.524182 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a\": container with ID starting with ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a not found: ID does not exist" 
containerID="ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.524287 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a"} err="failed to get container status \"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a\": rpc error: code = NotFound desc = could not find container \"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a\": container with ID starting with ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a not found: ID does not exist" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.524374 5136 scope.go:117] "RemoveContainer" containerID="27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.524747 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95"} err="failed to get container status \"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95\": rpc error: code = NotFound desc = could not find container \"27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95\": container with ID starting with 27b8adcc6b3237c5ffa559c230cf9f16cc30e5df8dade4ce47072a2f17129a95 not found: ID does not exist" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.524857 5136 scope.go:117] "RemoveContainer" containerID="ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.525080 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a"} err="failed to get container status \"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a\": rpc error: code = NotFound desc = could 
not find container \"ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a\": container with ID starting with ea7a592a4dc1f0b43f15b46ba316d17638e5d4bfc26fa0dc8bf13f22ca13dd3a not found: ID does not exist" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.692991 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.807139 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.818576 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.832526 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:52 crc kubenswrapper[5136]: E0320 07:12:52.832867 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api-log" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.832883 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api-log" Mar 20 07:12:52 crc kubenswrapper[5136]: E0320 07:12:52.832901 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.832907 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.833065 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f323747-95a7-4199-b250-bb5591a1c182" containerName="cinder-api-log" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.833090 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f323747-95a7-4199-b250-bb5591a1c182" 
containerName="cinder-api" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.833940 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.836951 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.838773 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.839061 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.903010 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910214 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910321 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910456 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc 
kubenswrapper[5136]: I0320 07:12:52.910570 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910704 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910784 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910839 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scwjn\" (UniqueName: \"kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910858 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:52 crc kubenswrapper[5136]: I0320 07:12:52.910874 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013050 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013448 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013519 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013598 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013663 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" 
Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013815 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scwjn\" (UniqueName: \"kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013855 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013873 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.013878 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.014029 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.014115 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.019101 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.019469 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.019844 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.019847 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.020282 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.021583 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.033580 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scwjn\" (UniqueName: \"kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn\") pod \"cinder-api-0\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.157640 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.485881 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerStarted","Data":"9103ade7cd09df25f48e36378a2bb9597e36c48024e753bdbfbdd5238e038dae"} Mar 20 07:12:53 crc kubenswrapper[5136]: I0320 07:12:53.621163 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:12:53 crc kubenswrapper[5136]: W0320 07:12:53.625274 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76d08c01_d488_4f36_9998_7f074633c7c5.slice/crio-aab757c2dc94939d0797a6c8423da847cee9bc95e622fe61ec74ea5928df97cd WatchSource:0}: Error finding container aab757c2dc94939d0797a6c8423da847cee9bc95e622fe61ec74ea5928df97cd: Status 404 returned error can't find the container with id aab757c2dc94939d0797a6c8423da847cee9bc95e622fe61ec74ea5928df97cd Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.411538 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f323747-95a7-4199-b250-bb5591a1c182" 
path="/var/lib/kubelet/pods/1f323747-95a7-4199-b250-bb5591a1c182/volumes" Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.507200 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerStarted","Data":"d634927012184728bf43eb92652caca4a427eb3140fa0f169ebc49e4b1103bd6"} Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.509205 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerStarted","Data":"c15b277e6d0d090e0e5755609decc556ab1c1f03a878f14749a60fdfeeec941e"} Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.509246 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerStarted","Data":"aab757c2dc94939d0797a6c8423da847cee9bc95e622fe61ec74ea5928df97cd"} Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.610052 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d86fb98dd-76pm8" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:35134->10.217.0.163:9311: read: connection reset by peer" Mar 20 07:12:54 crc kubenswrapper[5136]: I0320 07:12:54.610330 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-d86fb98dd-76pm8" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.163:9311/healthcheck\": read tcp 10.217.0.2:35130->10.217.0.163:9311: read: connection reset by peer" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.048116 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.249716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom\") pod \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.249760 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle\") pod \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.249893 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs\") pod \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.249980 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tx2gt\" (UniqueName: \"kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt\") pod \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.250028 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data\") pod \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\" (UID: \"8a79c65a-77e4-492c-bb32-5c562da1fe4c\") " Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.250470 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs" (OuterVolumeSpecName: "logs") pod "8a79c65a-77e4-492c-bb32-5c562da1fe4c" (UID: "8a79c65a-77e4-492c-bb32-5c562da1fe4c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.256027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt" (OuterVolumeSpecName: "kube-api-access-tx2gt") pod "8a79c65a-77e4-492c-bb32-5c562da1fe4c" (UID: "8a79c65a-77e4-492c-bb32-5c562da1fe4c"). InnerVolumeSpecName "kube-api-access-tx2gt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.258910 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8a79c65a-77e4-492c-bb32-5c562da1fe4c" (UID: "8a79c65a-77e4-492c-bb32-5c562da1fe4c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.281177 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8a79c65a-77e4-492c-bb32-5c562da1fe4c" (UID: "8a79c65a-77e4-492c-bb32-5c562da1fe4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.303211 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data" (OuterVolumeSpecName: "config-data") pod "8a79c65a-77e4-492c-bb32-5c562da1fe4c" (UID: "8a79c65a-77e4-492c-bb32-5c562da1fe4c"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.352223 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a79c65a-77e4-492c-bb32-5c562da1fe4c-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.352502 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tx2gt\" (UniqueName: \"kubernetes.io/projected/8a79c65a-77e4-492c-bb32-5c562da1fe4c-kube-api-access-tx2gt\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.352732 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.352950 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.353084 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a79c65a-77e4-492c-bb32-5c562da1fe4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.523946 5136 generic.go:334] "Generic (PLEG): container finished" podID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerID="62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf" exitCode=0 Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.524061 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-d86fb98dd-76pm8" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.524657 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerDied","Data":"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf"} Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.524778 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-d86fb98dd-76pm8" event={"ID":"8a79c65a-77e4-492c-bb32-5c562da1fe4c","Type":"ContainerDied","Data":"906ac955356980032ab967bfee58aa1175878c2109f34d9bbc6cfcc9e1a56ade"} Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.524852 5136 scope.go:117] "RemoveContainer" containerID="62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.530067 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerStarted","Data":"f3795d92724d61612c4e998a075d7ebdd89fee122f5c02cbcebdad3f46cd4b7c"} Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.530190 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.558878 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.5588546450000003 podStartE2EDuration="3.558854645s" podCreationTimestamp="2026-03-20 07:12:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:12:55.551436794 +0000 UTC m=+1407.810747935" watchObservedRunningTime="2026-03-20 07:12:55.558854645 +0000 UTC m=+1407.818165816" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.575202 5136 scope.go:117] "RemoveContainer" 
containerID="e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.580641 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"] Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.597553 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-d86fb98dd-76pm8"] Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.600564 5136 scope.go:117] "RemoveContainer" containerID="62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf" Mar 20 07:12:55 crc kubenswrapper[5136]: E0320 07:12:55.601493 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf\": container with ID starting with 62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf not found: ID does not exist" containerID="62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.601536 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf"} err="failed to get container status \"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf\": rpc error: code = NotFound desc = could not find container \"62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf\": container with ID starting with 62d1984e6ca19370a4308bcabb96c8e6b7bf937053486fbdba44fb4199b84baf not found: ID does not exist" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.601559 5136 scope.go:117] "RemoveContainer" containerID="e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6" Mar 20 07:12:55 crc kubenswrapper[5136]: E0320 07:12:55.602150 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6\": container with ID starting with e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6 not found: ID does not exist" containerID="e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6" Mar 20 07:12:55 crc kubenswrapper[5136]: I0320 07:12:55.602179 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6"} err="failed to get container status \"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6\": rpc error: code = NotFound desc = could not find container \"e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6\": container with ID starting with e132fadbbb11e4fd9f33aeb8d6e1c3219f93836f26b4472b5f2584e22cc3dbe6 not found: ID does not exist" Mar 20 07:12:56 crc kubenswrapper[5136]: I0320 07:12:56.418192 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" path="/var/lib/kubelet/pods/8a79c65a-77e4-492c-bb32-5c562da1fe4c/volumes" Mar 20 07:12:56 crc kubenswrapper[5136]: I0320 07:12:56.544207 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerStarted","Data":"11805be8e1ecba0ecec0d4173e4555e07b6e94d856d2b60561b740f2c3a233f1"} Mar 20 07:12:56 crc kubenswrapper[5136]: I0320 07:12:56.580388 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.291974957 podStartE2EDuration="6.58033075s" podCreationTimestamp="2026-03-20 07:12:50 +0000 UTC" firstStartedPulling="2026-03-20 07:12:51.358550735 +0000 UTC m=+1403.617861876" lastFinishedPulling="2026-03-20 07:12:55.646906508 +0000 UTC m=+1407.906217669" observedRunningTime="2026-03-20 07:12:56.573620301 +0000 UTC m=+1408.832931492" watchObservedRunningTime="2026-03-20 
07:12:56.58033075 +0000 UTC m=+1408.839641911" Mar 20 07:12:57 crc kubenswrapper[5136]: I0320 07:12:57.555243 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:12:57 crc kubenswrapper[5136]: I0320 07:12:57.706114 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:12:57 crc kubenswrapper[5136]: I0320 07:12:57.810808 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"] Mar 20 07:12:57 crc kubenswrapper[5136]: I0320 07:12:57.814738 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="dnsmasq-dns" containerID="cri-o://dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd" gracePeriod=10 Mar 20 07:12:57 crc kubenswrapper[5136]: I0320 07:12:57.979340 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.041395 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.355787 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418502 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418627 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418869 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62m5b\" (UniqueName: \"kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418937 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.418972 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb\") pod \"3e6c911d-6da1-440a-8d63-d61e68b0272c\" (UID: \"3e6c911d-6da1-440a-8d63-d61e68b0272c\") " Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.474185 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b" (OuterVolumeSpecName: "kube-api-access-62m5b") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "kube-api-access-62m5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.529641 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62m5b\" (UniqueName: \"kubernetes.io/projected/3e6c911d-6da1-440a-8d63-d61e68b0272c-kube-api-access-62m5b\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.534495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.550085 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config" (OuterVolumeSpecName: "config") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.587959 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.594381 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.594724 5136 generic.go:334] "Generic (PLEG): container finished" podID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerID="dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd" exitCode=0 Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595000 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="cinder-scheduler" containerID="cri-o://2cfb10b9d836c0f827d7cbf404deea18e90ee9f1642613e1791248f726bfa45a" gracePeriod=30 Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595375 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" event={"ID":"3e6c911d-6da1-440a-8d63-d61e68b0272c","Type":"ContainerDied","Data":"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd"} Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595737 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7df4c9958f-99prp" event={"ID":"3e6c911d-6da1-440a-8d63-d61e68b0272c","Type":"ContainerDied","Data":"f5741bc42c68687793c23bf8ad98b077172ee8bf2f31364fb5498b69a4e6d1bb"} Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595752 5136 scope.go:117] "RemoveContainer" containerID="dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.595800 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="probe" containerID="cri-o://b81f7d930b2cc73362e82a5dc550922c5559167f66d5246607f3e9e7db350a2e" gracePeriod=30 Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.622338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3e6c911d-6da1-440a-8d63-d61e68b0272c" (UID: "3e6c911d-6da1-440a-8d63-d61e68b0272c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.632206 5136 scope.go:117] "RemoveContainer" containerID="e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.633140 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.633172 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.633184 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.633193 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.633201 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3e6c911d-6da1-440a-8d63-d61e68b0272c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.652722 5136 scope.go:117] "RemoveContainer" containerID="dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd" Mar 20 07:12:58 crc kubenswrapper[5136]: E0320 07:12:58.653166 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd\": 
container with ID starting with dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd not found: ID does not exist" containerID="dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.653196 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd"} err="failed to get container status \"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd\": rpc error: code = NotFound desc = could not find container \"dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd\": container with ID starting with dbde86fea0118b25157f9df99a740b6aaef72df85e4f9d57789c87e9bc5ad6cd not found: ID does not exist" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.653216 5136 scope.go:117] "RemoveContainer" containerID="e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e" Mar 20 07:12:58 crc kubenswrapper[5136]: E0320 07:12:58.653465 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e\": container with ID starting with e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e not found: ID does not exist" containerID="e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.653486 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e"} err="failed to get container status \"e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e\": rpc error: code = NotFound desc = could not find container \"e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e\": container with ID starting with 
e302dc855c2d9ff6da9e98888618b35c93b6a40c5c6e6cb5260369e82c3ad72e not found: ID does not exist" Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.924482 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"] Mar 20 07:12:58 crc kubenswrapper[5136]: I0320 07:12:58.931698 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7df4c9958f-99prp"] Mar 20 07:12:59 crc kubenswrapper[5136]: I0320 07:12:59.924307 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:12:59 crc kubenswrapper[5136]: I0320 07:12:59.956050 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-f464f8686-f4nfl" Mar 20 07:13:00 crc kubenswrapper[5136]: I0320 07:13:00.405934 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" path="/var/lib/kubelet/pods/3e6c911d-6da1-440a-8d63-d61e68b0272c/volumes" Mar 20 07:13:00 crc kubenswrapper[5136]: I0320 07:13:00.620689 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-564b95fd68-m2j52" Mar 20 07:13:00 crc kubenswrapper[5136]: I0320 07:13:00.627474 5136 generic.go:334] "Generic (PLEG): container finished" podID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerID="b81f7d930b2cc73362e82a5dc550922c5559167f66d5246607f3e9e7db350a2e" exitCode=0 Mar 20 07:13:00 crc kubenswrapper[5136]: I0320 07:13:00.627709 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerDied","Data":"b81f7d930b2cc73362e82a5dc550922c5559167f66d5246607f3e9e7db350a2e"} Mar 20 07:13:01 crc kubenswrapper[5136]: I0320 07:13:01.141441 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:13:01 crc kubenswrapper[5136]: I0320 
07:13:01.640898 5136 generic.go:334] "Generic (PLEG): container finished" podID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerID="2cfb10b9d836c0f827d7cbf404deea18e90ee9f1642613e1791248f726bfa45a" exitCode=0 Mar 20 07:13:01 crc kubenswrapper[5136]: I0320 07:13:01.641084 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerDied","Data":"2cfb10b9d836c0f827d7cbf404deea18e90ee9f1642613e1791248f726bfa45a"} Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.011250 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.101940 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.102036 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.102061 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzhh2\" (UniqueName: \"kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.102090 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.102154 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.102230 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle\") pod \"0d918240-f8fb-459f-a116-7fce9c0068a8\" (UID: \"0d918240-f8fb-459f-a116-7fce9c0068a8\") " Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.115701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2" (OuterVolumeSpecName: "kube-api-access-nzhh2") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "kube-api-access-nzhh2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.115785 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.132020 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.178983 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts" (OuterVolumeSpecName: "scripts") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.204600 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0d918240-f8fb-459f-a116-7fce9c0068a8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.204917 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.204927 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.204935 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzhh2\" (UniqueName: \"kubernetes.io/projected/0d918240-f8fb-459f-a116-7fce9c0068a8-kube-api-access-nzhh2\") on node \"crc\" DevicePath \"\"" Mar 20 
07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.223229 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.288586 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data" (OuterVolumeSpecName: "config-data") pod "0d918240-f8fb-459f-a116-7fce9c0068a8" (UID: "0d918240-f8fb-459f-a116-7fce9c0068a8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.306695 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.306729 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d918240-f8fb-459f-a116-7fce9c0068a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.651866 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.651859 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0d918240-f8fb-459f-a116-7fce9c0068a8","Type":"ContainerDied","Data":"044038db3dbd29add28a6376877ab76e790fddb6ebde5915cf5bb01e87957d6b"} Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.652478 5136 scope.go:117] "RemoveContainer" containerID="b81f7d930b2cc73362e82a5dc550922c5559167f66d5246607f3e9e7db350a2e" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.694463 5136 scope.go:117] "RemoveContainer" containerID="2cfb10b9d836c0f827d7cbf404deea18e90ee9f1642613e1791248f726bfa45a" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.700865 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.715588 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.732412 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:13:02 crc kubenswrapper[5136]: E0320 07:13:02.732730 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="dnsmasq-dns" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.732747 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="dnsmasq-dns" Mar 20 07:13:02 crc kubenswrapper[5136]: E0320 07:13:02.732767 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="cinder-scheduler" Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.732774 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="cinder-scheduler" Mar 20 07:13:02 crc 
kubenswrapper[5136]: E0320 07:13:02.732782 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api-log"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.732788 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api-log"
Mar 20 07:13:02 crc kubenswrapper[5136]: E0320 07:13:02.732797 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="probe"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.732802 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="probe"
Mar 20 07:13:02 crc kubenswrapper[5136]: E0320 07:13:02.732814 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733345 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api"
Mar 20 07:13:02 crc kubenswrapper[5136]: E0320 07:13:02.733375 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="init"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733385 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="init"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733566 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733585 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e6c911d-6da1-440a-8d63-d61e68b0272c" containerName="dnsmasq-dns"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733599 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="cinder-scheduler"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733622 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" containerName="probe"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.733642 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a79c65a-77e4-492c-bb32-5c562da1fe4c" containerName="barbican-api-log"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.734651 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.738130 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.748237 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818312 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818393 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818429 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818773 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-482rz\" (UniqueName: \"kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818947 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.818998 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920591 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920637 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920730 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-482rz\" (UniqueName: \"kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920738 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920875 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.920931 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.921024 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.925741 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.926018 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.930758 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.932008 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:02 crc kubenswrapper[5136]: I0320 07:13:02.939800 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-482rz\" (UniqueName: \"kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz\") pod \"cinder-scheduler-0\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " pod="openstack/cinder-scheduler-0"
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.048569 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.118123 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6ff4f58fb9-7gtff"
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.187217 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-564b95fd68-m2j52"]
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.187412 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-564b95fd68-m2j52" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-api" containerID="cri-o://df57270fa341245294b8409621c8d255f8f17bac5716eb17c148b55857569799" gracePeriod=30
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.187752 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-564b95fd68-m2j52" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-httpd" containerID="cri-o://024fcc0cd809e83faec73e5ba56c99a83ab40c7bc4ad09d07aea8ace13ee29fb" gracePeriod=30
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.595204 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 20 07:13:03 crc kubenswrapper[5136]: W0320 07:13:03.597164 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31adef78_59fe_4327_9586_0c12177c7bb7.slice/crio-68a0376e88b4b3da7cb1aed58c92f9e17081913aac9827be120b7a59b01a2ab0 WatchSource:0}: Error finding container 68a0376e88b4b3da7cb1aed58c92f9e17081913aac9827be120b7a59b01a2ab0: Status 404 returned error can't find the container with id 68a0376e88b4b3da7cb1aed58c92f9e17081913aac9827be120b7a59b01a2ab0
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.635570 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc8db4fdb-hpjdg"
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.677621 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerStarted","Data":"68a0376e88b4b3da7cb1aed58c92f9e17081913aac9827be120b7a59b01a2ab0"}
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.688542 5136 generic.go:334] "Generic (PLEG): container finished" podID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerID="024fcc0cd809e83faec73e5ba56c99a83ab40c7bc4ad09d07aea8ace13ee29fb" exitCode=0
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.688611 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerDied","Data":"024fcc0cd809e83faec73e5ba56c99a83ab40c7bc4ad09d07aea8ace13ee29fb"}
Mar 20 07:13:03 crc kubenswrapper[5136]: I0320 07:13:03.949755 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-dc8db4fdb-hpjdg"
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.003958 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f464f8686-f4nfl"]
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.004237 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-f464f8686-f4nfl" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-log" containerID="cri-o://d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd" gracePeriod=30
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.004649 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-f464f8686-f4nfl" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-api" containerID="cri-o://ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95" gracePeriod=30
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.411797 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d918240-f8fb-459f-a116-7fce9c0068a8" path="/var/lib/kubelet/pods/0d918240-f8fb-459f-a116-7fce9c0068a8/volumes"
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.746375 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerStarted","Data":"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c"}
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.757563 5136 generic.go:334] "Generic (PLEG): container finished" podID="98f17780-5e89-47b5-a280-ff05d993aec1" containerID="d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd" exitCode=143
Mar 20 07:13:04 crc kubenswrapper[5136]: I0320 07:13:04.757608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerDied","Data":"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"}
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.233258 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.234704 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.236558 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.239407 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.239465 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-z5th8"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.246525 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.371544 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.371667 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.371730 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.371747 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdzch\" (UniqueName: \"kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.473774 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.474633 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.474799 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.474932 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdzch\" (UniqueName: \"kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.475074 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.483377 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.489318 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.491955 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdzch\" (UniqueName: \"kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch\") pod \"openstackclient\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.552346 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.757535 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.773611 5136 generic.go:334] "Generic (PLEG): container finished" podID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerID="df57270fa341245294b8409621c8d255f8f17bac5716eb17c148b55857569799" exitCode=0
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.773691 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerDied","Data":"df57270fa341245294b8409621c8d255f8f17bac5716eb17c148b55857569799"}
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.776486 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerStarted","Data":"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb"}
Mar 20 07:13:05 crc kubenswrapper[5136]: I0320 07:13:05.819093 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.81907462 podStartE2EDuration="3.81907462s" podCreationTimestamp="2026-03-20 07:13:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:05.810027588 +0000 UTC m=+1418.069338739" watchObservedRunningTime="2026-03-20 07:13:05.81907462 +0000 UTC m=+1418.078385771"
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.027885 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Mar 20 07:13:06 crc kubenswrapper[5136]: W0320 07:13:06.030016 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17ad787b_18bc_4afd_840b_2458b494094a.slice/crio-c030f875e5ff027f4f05e9d519ba652a96422a4a9f30e0ce0b1699cab2e278b4 WatchSource:0}: Error finding container c030f875e5ff027f4f05e9d519ba652a96422a4a9f30e0ce0b1699cab2e278b4: Status 404 returned error can't find the container with id c030f875e5ff027f4f05e9d519ba652a96422a4a9f30e0ce0b1699cab2e278b4
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.197531 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.293507 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45w7v\" (UniqueName: \"kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v\") pod \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") "
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.293556 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle\") pod \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") "
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.293659 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config\") pod \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") "
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.293718 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config\") pod \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") "
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.293772 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs\") pod \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\" (UID: \"2ae7d29f-d050-4d87-b59e-1237f7f6d48a\") "
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.299387 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "2ae7d29f-d050-4d87-b59e-1237f7f6d48a" (UID: "2ae7d29f-d050-4d87-b59e-1237f7f6d48a"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.305027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v" (OuterVolumeSpecName: "kube-api-access-45w7v") pod "2ae7d29f-d050-4d87-b59e-1237f7f6d48a" (UID: "2ae7d29f-d050-4d87-b59e-1237f7f6d48a"). InnerVolumeSpecName "kube-api-access-45w7v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.388966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config" (OuterVolumeSpecName: "config") pod "2ae7d29f-d050-4d87-b59e-1237f7f6d48a" (UID: "2ae7d29f-d050-4d87-b59e-1237f7f6d48a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.396483 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-httpd-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.396515 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45w7v\" (UniqueName: \"kubernetes.io/projected/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-kube-api-access-45w7v\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.396525 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.404878 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ae7d29f-d050-4d87-b59e-1237f7f6d48a" (UID: "2ae7d29f-d050-4d87-b59e-1237f7f6d48a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.439764 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "2ae7d29f-d050-4d87-b59e-1237f7f6d48a" (UID: "2ae7d29f-d050-4d87-b59e-1237f7f6d48a"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.498235 5136 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.498271 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ae7d29f-d050-4d87-b59e-1237f7f6d48a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.785475 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"17ad787b-18bc-4afd-840b-2458b494094a","Type":"ContainerStarted","Data":"c030f875e5ff027f4f05e9d519ba652a96422a4a9f30e0ce0b1699cab2e278b4"}
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.788084 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-564b95fd68-m2j52" event={"ID":"2ae7d29f-d050-4d87-b59e-1237f7f6d48a","Type":"ContainerDied","Data":"182398618c3a1531c9ad080ffbd4a768caa0163ecc71bb0c12e322558e13d0bb"}
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.788125 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-564b95fd68-m2j52"
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.788146 5136 scope.go:117] "RemoveContainer" containerID="024fcc0cd809e83faec73e5ba56c99a83ab40c7bc4ad09d07aea8ace13ee29fb"
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.818479 5136 scope.go:117] "RemoveContainer" containerID="df57270fa341245294b8409621c8d255f8f17bac5716eb17c148b55857569799"
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.818672 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-564b95fd68-m2j52"]
Mar 20 07:13:06 crc kubenswrapper[5136]: I0320 07:13:06.828407 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-564b95fd68-m2j52"]
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.615744 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f464f8686-f4nfl"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720629 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720686 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720761 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720866 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720919 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720939 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv6wk\" (UniqueName: \"kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.720954 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data\") pod \"98f17780-5e89-47b5-a280-ff05d993aec1\" (UID: \"98f17780-5e89-47b5-a280-ff05d993aec1\") "
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.721082 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs" (OuterVolumeSpecName: "logs") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.721726 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/98f17780-5e89-47b5-a280-ff05d993aec1-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.727397 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk" (OuterVolumeSpecName: "kube-api-access-mv6wk") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "kube-api-access-mv6wk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.727904 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts" (OuterVolumeSpecName: "scripts") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.780324 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.783677 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data" (OuterVolumeSpecName: "config-data") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.803709 5136 generic.go:334] "Generic (PLEG): container finished" podID="98f17780-5e89-47b5-a280-ff05d993aec1" containerID="ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95" exitCode=0
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.803849 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-f464f8686-f4nfl"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.803833 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerDied","Data":"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"}
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.804013 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-f464f8686-f4nfl" event={"ID":"98f17780-5e89-47b5-a280-ff05d993aec1","Type":"ContainerDied","Data":"5651bf9b6915e66b83ba1121283006e83d01461bec1d3f8053fa31cecb4a7017"}
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.804040 5136 scope.go:117] "RemoveContainer" containerID="ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.825193 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.825224 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.825234 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mv6wk\" (UniqueName: \"kubernetes.io/projected/98f17780-5e89-47b5-a280-ff05d993aec1-kube-api-access-mv6wk\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.825263 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.832485 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.833703 5136 scope.go:117] "RemoveContainer" containerID="d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.834283 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "98f17780-5e89-47b5-a280-ff05d993aec1" (UID: "98f17780-5e89-47b5-a280-ff05d993aec1"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.858445 5136 scope.go:117] "RemoveContainer" containerID="ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"
Mar 20 07:13:07 crc kubenswrapper[5136]: E0320 07:13:07.861048 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95\": container with ID starting with ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95 not found: ID does not exist" containerID="ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.861103 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95"} err="failed to get container status \"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95\": rpc error: code = NotFound desc = could not find container \"ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95\": container with ID starting with ca789654434e2938058e89c8bf27e5d6e9f18a69d4566b7ab05c972c6642de95 not found: ID does not exist"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.861126 5136 scope.go:117] "RemoveContainer" containerID="d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"
Mar 20 07:13:07 crc kubenswrapper[5136]: E0320 07:13:07.861516 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd\": container with ID starting with d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd not found: ID does not exist" containerID="d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"
Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.861566
5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd"} err="failed to get container status \"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd\": rpc error: code = NotFound desc = could not find container \"d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd\": container with ID starting with d15adc608e61a192a134211ec88732de0cf0a72471089b746f1c3d17455f2ccd not found: ID does not exist" Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.926377 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:07 crc kubenswrapper[5136]: I0320 07:13:07.926406 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/98f17780-5e89-47b5-a280-ff05d993aec1-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.050082 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.136332 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-f464f8686-f4nfl"] Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.145436 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-f464f8686-f4nfl"] Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.336481 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"] Mar 20 07:13:08 crc kubenswrapper[5136]: E0320 07:13:08.337501 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-httpd" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.337586 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-httpd" Mar 20 07:13:08 crc kubenswrapper[5136]: E0320 07:13:08.337678 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-api" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.337738 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-api" Mar 20 07:13:08 crc kubenswrapper[5136]: E0320 07:13:08.337795 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-api" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.337869 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-api" Mar 20 07:13:08 crc kubenswrapper[5136]: E0320 07:13:08.337933 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-log" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.337981 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-log" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.338186 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-api" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.338252 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-api" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.338314 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" containerName="neutron-httpd" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.338371 5136 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" containerName="placement-log" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.339300 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.344083 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.344360 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.344363 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.377289 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"] Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.424171 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ae7d29f-d050-4d87-b59e-1237f7f6d48a" path="/var/lib/kubelet/pods/2ae7d29f-d050-4d87-b59e-1237f7f6d48a/volumes" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.425029 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f17780-5e89-47b5-a280-ff05d993aec1" path="/var/lib/kubelet/pods/98f17780-5e89-47b5-a280-ff05d993aec1/volumes" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434028 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpskn\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434120 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434158 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434192 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434210 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434231 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434634 
5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.434901 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536122 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpskn\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536212 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536242 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536269 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536284 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536303 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536329 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.536357 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.537470 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.540651 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.545640 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.547927 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.549685 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.551461 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data\") pod 
\"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.561654 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.577030 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpskn\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn\") pod \"swift-proxy-744d6f84fc-bqcsc\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") " pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:08 crc kubenswrapper[5136]: I0320 07:13:08.656921 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.234374 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"] Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.829907 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerStarted","Data":"793f74839b7ab06cf4c132cbd574d3bb47712ec9caa473ebbd084643fcce6f31"} Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.830219 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerStarted","Data":"0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac"} Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.830232 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerStarted","Data":"1d45fa03e9e760b3fecb6f7927ee88ef303052eb3da3a45f5cb31589469d2afb"} Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.830266 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.830283 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:09 crc kubenswrapper[5136]: I0320 07:13:09.852757 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-744d6f84fc-bqcsc" podStartSLOduration=1.852742616 podStartE2EDuration="1.852742616s" podCreationTimestamp="2026-03-20 07:13:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:09.852003702 +0000 UTC 
m=+1422.111314873" watchObservedRunningTime="2026-03-20 07:13:09.852742616 +0000 UTC m=+1422.112053767" Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.011055 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.011359 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-central-agent" containerID="cri-o://da66db3f467f27617e353e5b7cb9122c8ae01fca01e9365d62e83fc76b60055a" gracePeriod=30 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.012006 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="sg-core" containerID="cri-o://d634927012184728bf43eb92652caca4a427eb3140fa0f169ebc49e4b1103bd6" gracePeriod=30 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.012089 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-notification-agent" containerID="cri-o://9103ade7cd09df25f48e36378a2bb9597e36c48024e753bdbfbdd5238e038dae" gracePeriod=30 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.012134 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="proxy-httpd" containerID="cri-o://11805be8e1ecba0ecec0d4173e4555e07b6e94d856d2b60561b740f2c3a233f1" gracePeriod=30 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.119900 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.168:3000/\": read tcp 10.217.0.2:51860->10.217.0.168:3000: read: 
connection reset by peer" Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.846947 5136 generic.go:334] "Generic (PLEG): container finished" podID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerID="11805be8e1ecba0ecec0d4173e4555e07b6e94d856d2b60561b740f2c3a233f1" exitCode=0 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.846991 5136 generic.go:334] "Generic (PLEG): container finished" podID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerID="d634927012184728bf43eb92652caca4a427eb3140fa0f169ebc49e4b1103bd6" exitCode=2 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.847001 5136 generic.go:334] "Generic (PLEG): container finished" podID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerID="9103ade7cd09df25f48e36378a2bb9597e36c48024e753bdbfbdd5238e038dae" exitCode=0 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.847008 5136 generic.go:334] "Generic (PLEG): container finished" podID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerID="da66db3f467f27617e353e5b7cb9122c8ae01fca01e9365d62e83fc76b60055a" exitCode=0 Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.847370 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerDied","Data":"11805be8e1ecba0ecec0d4173e4555e07b6e94d856d2b60561b740f2c3a233f1"} Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.847731 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerDied","Data":"d634927012184728bf43eb92652caca4a427eb3140fa0f169ebc49e4b1103bd6"} Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.847750 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerDied","Data":"9103ade7cd09df25f48e36378a2bb9597e36c48024e753bdbfbdd5238e038dae"} Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 
07:13:10.847762 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerDied","Data":"da66db3f467f27617e353e5b7cb9122c8ae01fca01e9365d62e83fc76b60055a"} Mar 20 07:13:10 crc kubenswrapper[5136]: I0320 07:13:10.928725 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005342 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005400 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dlljw\" (UniqueName: \"kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005481 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005563 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005634 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005649 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.005683 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data\") pod \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\" (UID: \"96e356ef-4c69-41e5-b9ae-14c7faadf1b2\") " Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.018333 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts" (OuterVolumeSpecName: "scripts") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.018494 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.021205 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.029278 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw" (OuterVolumeSpecName: "kube-api-access-dlljw") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "kube-api-access-dlljw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.100187 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.107926 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.107993 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dlljw\" (UniqueName: \"kubernetes.io/projected/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-kube-api-access-dlljw\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.108005 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.108012 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.108022 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.149958 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data" (OuterVolumeSpecName: "config-data") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.193966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96e356ef-4c69-41e5-b9ae-14c7faadf1b2" (UID: "96e356ef-4c69-41e5-b9ae-14c7faadf1b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.209563 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.209603 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96e356ef-4c69-41e5-b9ae-14c7faadf1b2-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.859026 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96e356ef-4c69-41e5-b9ae-14c7faadf1b2","Type":"ContainerDied","Data":"ffa0508cf624ffc95ed314dc6a07f83a9f8e8ec653c0427813fff7d7bbe42409"} Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.859378 5136 scope.go:117] "RemoveContainer" containerID="11805be8e1ecba0ecec0d4173e4555e07b6e94d856d2b60561b740f2c3a233f1" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.859118 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.903662 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.916189 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.926578 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:11 crc kubenswrapper[5136]: E0320 07:13:11.927028 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="proxy-httpd" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927046 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="proxy-httpd" Mar 20 07:13:11 crc kubenswrapper[5136]: E0320 07:13:11.927055 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-central-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927062 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-central-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: E0320 07:13:11.927081 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="sg-core" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927087 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="sg-core" Mar 20 07:13:11 crc kubenswrapper[5136]: E0320 07:13:11.927103 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-notification-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927109 5136 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-notification-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927264 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-central-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927279 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="sg-core" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927301 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="proxy-httpd" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.927309 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" containerName="ceilometer-notification-agent" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.929478 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.933308 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.934586 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:13:11 crc kubenswrapper[5136]: I0320 07:13:11.938622 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031655 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031727 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031768 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031832 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " 
pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031875 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5ppp\" (UniqueName: \"kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031892 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.031921 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.133862 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.133936 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.133977 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.134045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.134092 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.134133 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5ppp\" (UniqueName: \"kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.134153 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.134699 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 
crc kubenswrapper[5136]: I0320 07:13:12.134761 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.139431 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.139849 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.141423 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.148447 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.150767 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5ppp\" (UniqueName: \"kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp\") pod \"ceilometer-0\" (UID: 
\"970a5b8d-94f8-4638-b351-40867c27568a\") " pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.274804 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:12 crc kubenswrapper[5136]: I0320 07:13:12.407290 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96e356ef-4c69-41e5-b9ae-14c7faadf1b2" path="/var/lib/kubelet/pods/96e356ef-4c69-41e5-b9ae-14c7faadf1b2/volumes" Mar 20 07:13:13 crc kubenswrapper[5136]: I0320 07:13:13.276266 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 20 07:13:15 crc kubenswrapper[5136]: I0320 07:13:15.499251 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.165588 5136 scope.go:117] "RemoveContainer" containerID="d634927012184728bf43eb92652caca4a427eb3140fa0f169ebc49e4b1103bd6" Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.287773 5136 scope.go:117] "RemoveContainer" containerID="9103ade7cd09df25f48e36378a2bb9597e36c48024e753bdbfbdd5238e038dae" Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.415938 5136 scope.go:117] "RemoveContainer" containerID="da66db3f467f27617e353e5b7cb9122c8ae01fca01e9365d62e83fc76b60055a" Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.675319 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.912408 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerStarted","Data":"b1ecb08e86fb97ca8048b41dfca62020c1e1817555ee06471950360149ce18be"} Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.914099 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"17ad787b-18bc-4afd-840b-2458b494094a","Type":"ContainerStarted","Data":"ef5361e0b73e9c41cc23b5ebe9348fce6a363e59e0bc84a305ad44756dd780af"} Mar 20 07:13:16 crc kubenswrapper[5136]: I0320 07:13:16.943533 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.7326064030000001 podStartE2EDuration="11.943514332s" podCreationTimestamp="2026-03-20 07:13:05 +0000 UTC" firstStartedPulling="2026-03-20 07:13:06.031735558 +0000 UTC m=+1418.291046709" lastFinishedPulling="2026-03-20 07:13:16.242643487 +0000 UTC m=+1428.501954638" observedRunningTime="2026-03-20 07:13:16.934935645 +0000 UTC m=+1429.194246816" watchObservedRunningTime="2026-03-20 07:13:16.943514332 +0000 UTC m=+1429.202825473" Mar 20 07:13:17 crc kubenswrapper[5136]: I0320 07:13:17.924266 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerStarted","Data":"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5"} Mar 20 07:13:18 crc kubenswrapper[5136]: I0320 07:13:18.661850 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:18 crc kubenswrapper[5136]: I0320 07:13:18.662186 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:13:18 crc kubenswrapper[5136]: I0320 07:13:18.935579 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerStarted","Data":"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49"} Mar 20 07:13:18 crc kubenswrapper[5136]: I0320 07:13:18.935969 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerStarted","Data":"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb"} Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.007332 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-2sj8m"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.008722 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.017977 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2sj8m"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.080137 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t25k2\" (UniqueName: \"kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.080247 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.112562 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-4jdnj"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.113666 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.123675 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e3bd-account-create-update-zlrc6"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.124857 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.127690 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.141206 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-zlrc6"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.187526 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.187630 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.187715 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t25k2\" (UniqueName: \"kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc 
kubenswrapper[5136]: I0320 07:13:20.187744 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvtrm\" (UniqueName: \"kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.188690 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.199138 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4jdnj"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.228315 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t25k2\" (UniqueName: \"kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2\") pod \"nova-api-db-create-2sj8m\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.289672 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.289797 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.289862 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5xcq\" (UniqueName: \"kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.289892 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvtrm\" (UniqueName: \"kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.290713 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.325639 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.330170 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-xpg98"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.331510 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.335514 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvtrm\" (UniqueName: \"kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm\") pod \"nova-cell0-db-create-4jdnj\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.364243 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-lv952"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.365413 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.367418 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.406704 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5xcq\" (UniqueName: \"kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.406830 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.406952 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.407031 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm9sg\" (UniqueName: \"kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.409545 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.433923 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.435584 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5xcq\" (UniqueName: \"kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq\") pod \"nova-api-e3bd-account-create-update-zlrc6\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.439906 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xpg98"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.439938 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-lv952"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.445180 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.508879 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.509300 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9wlx\" (UniqueName: \"kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.509325 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xm9sg\" (UniqueName: \"kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.509367 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.510725 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.528069 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0423-account-create-update-ntmkb"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.529351 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm9sg\" (UniqueName: \"kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg\") pod \"nova-cell1-db-create-xpg98\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.530570 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.533733 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.548255 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-ntmkb"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.611046 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.611162 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.611233 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9wlx\" (UniqueName: \"kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.611274 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk2sf\" (UniqueName: 
\"kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.611871 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.640407 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9wlx\" (UniqueName: \"kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx\") pod \"nova-cell0-0f90-account-create-update-lv952\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.712704 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vk2sf\" (UniqueName: \"kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.713103 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 
07:13:20.713779 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.735502 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vk2sf\" (UniqueName: \"kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf\") pod \"nova-cell1-0423-account-create-update-ntmkb\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.809064 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.821879 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.853378 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.887408 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-2sj8m"] Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.955688 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerStarted","Data":"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11"} Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.955769 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-central-agent" containerID="cri-o://5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5" gracePeriod=30 Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.955830 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="proxy-httpd" containerID="cri-o://34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11" gracePeriod=30 Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.955900 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="sg-core" containerID="cri-o://ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49" gracePeriod=30 Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.955946 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-notification-agent" containerID="cri-o://b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb" gracePeriod=30 Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 
07:13:20.956258 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.964793 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2sj8m" event={"ID":"bfbcdb71-4e43-4243-a408-08d69b6d7328","Type":"ContainerStarted","Data":"df133601606c7da3d6d3058e70578121e914114c2561a58b46f39470ce91218f"} Mar 20 07:13:20 crc kubenswrapper[5136]: I0320 07:13:20.989689 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.05689405 podStartE2EDuration="9.989674169s" podCreationTimestamp="2026-03-20 07:13:11 +0000 UTC" firstStartedPulling="2026-03-20 07:13:16.680950891 +0000 UTC m=+1428.940262052" lastFinishedPulling="2026-03-20 07:13:20.61373102 +0000 UTC m=+1432.873042171" observedRunningTime="2026-03-20 07:13:20.977252942 +0000 UTC m=+1433.236564093" watchObservedRunningTime="2026-03-20 07:13:20.989674169 +0000 UTC m=+1433.248985320" Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.042637 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-4jdnj"] Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.128997 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-zlrc6"] Mar 20 07:13:21 crc kubenswrapper[5136]: W0320 07:13:21.144294 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf91601d4_11a0_4327_8f7e_6856df2b4643.slice/crio-af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423 WatchSource:0}: Error finding container af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423: Status 404 returned error can't find the container with id af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.333595 5136 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-xpg98"] Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.421590 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-lv952"] Mar 20 07:13:21 crc kubenswrapper[5136]: W0320 07:13:21.426107 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a1492b7_73df_440c_9246_ae0e3c2e8802.slice/crio-0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e WatchSource:0}: Error finding container 0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e: Status 404 returned error can't find the container with id 0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.500134 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-ntmkb"] Mar 20 07:13:21 crc kubenswrapper[5136]: W0320 07:13:21.633663 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7fd262d5_bfc7_49ae_908e_709fa9d0f55f.slice/crio-d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c WatchSource:0}: Error finding container d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c: Status 404 returned error can't find the container with id d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.975000 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f90-account-create-update-lv952" event={"ID":"2a1492b7-73df-440c-9246-ae0e3c2e8802","Type":"ContainerStarted","Data":"76ecc1efaf0109117b2021b8dc8f89423ec738c34b9f16b5ea8ada8e167cdf99"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.975509 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-0f90-account-create-update-lv952" event={"ID":"2a1492b7-73df-440c-9246-ae0e3c2e8802","Type":"ContainerStarted","Data":"0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978682 5136 generic.go:334] "Generic (PLEG): container finished" podID="970a5b8d-94f8-4638-b351-40867c27568a" containerID="34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11" exitCode=0 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978713 5136 generic.go:334] "Generic (PLEG): container finished" podID="970a5b8d-94f8-4638-b351-40867c27568a" containerID="ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49" exitCode=2 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978723 5136 generic.go:334] "Generic (PLEG): container finished" podID="970a5b8d-94f8-4638-b351-40867c27568a" containerID="b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb" exitCode=0 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978725 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerDied","Data":"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978770 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerDied","Data":"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.978786 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerDied","Data":"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.986837 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="f91601d4-11a0-4327-8f7e-6856df2b4643" containerID="6963baa6fe7d9db38870a70531888cfee8f7d44c3eff1597da33cf867ee591c8" exitCode=0 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.986985 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" event={"ID":"f91601d4-11a0-4327-8f7e-6856df2b4643","Type":"ContainerDied","Data":"6963baa6fe7d9db38870a70531888cfee8f7d44c3eff1597da33cf867ee591c8"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.987009 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" event={"ID":"f91601d4-11a0-4327-8f7e-6856df2b4643","Type":"ContainerStarted","Data":"af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.991591 5136 generic.go:334] "Generic (PLEG): container finished" podID="bfbcdb71-4e43-4243-a408-08d69b6d7328" containerID="edc5e28eb62af197edd849dc06e38cdd2bebac736971174a120fd4afd95e52b2" exitCode=0 Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.991787 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2sj8m" event={"ID":"bfbcdb71-4e43-4243-a408-08d69b6d7328","Type":"ContainerDied","Data":"edc5e28eb62af197edd849dc06e38cdd2bebac736971174a120fd4afd95e52b2"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.995182 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xpg98" event={"ID":"fdfd9851-96cd-483e-9e66-b1cc255cb3e2","Type":"ContainerStarted","Data":"22e326ee5e74b9ee2e3ad6076eac75a725689f97883ae8bb80d1de284edb7a74"} Mar 20 07:13:21 crc kubenswrapper[5136]: I0320 07:13:21.995276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xpg98" event={"ID":"fdfd9851-96cd-483e-9e66-b1cc255cb3e2","Type":"ContainerStarted","Data":"014c5744df20c0ca1d0890e233c317c3d3d1672f4d7a42bb3df502680f5a6c07"} Mar 20 07:13:21 crc 
kubenswrapper[5136]: I0320 07:13:21.996535 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-0f90-account-create-update-lv952" podStartSLOduration=1.996513582 podStartE2EDuration="1.996513582s" podCreationTimestamp="2026-03-20 07:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:21.987476831 +0000 UTC m=+1434.246787982" watchObservedRunningTime="2026-03-20 07:13:21.996513582 +0000 UTC m=+1434.255824733" Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.004702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" event={"ID":"7fd262d5-bfc7-49ae-908e-709fa9d0f55f","Type":"ContainerStarted","Data":"0d231656eec1735b1a5bc9e9719bbfb1dc2f5b357bdf072349808ab12f944278"} Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.004942 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" event={"ID":"7fd262d5-bfc7-49ae-908e-709fa9d0f55f","Type":"ContainerStarted","Data":"d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c"} Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.007167 5136 generic.go:334] "Generic (PLEG): container finished" podID="edb3559d-359a-4add-8216-afb68a19e111" containerID="89325ee63cd0d5963c16a3cd15b18e01966cac4c73b616f8222bb05ec0a94fbe" exitCode=0 Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.007275 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4jdnj" event={"ID":"edb3559d-359a-4add-8216-afb68a19e111","Type":"ContainerDied","Data":"89325ee63cd0d5963c16a3cd15b18e01966cac4c73b616f8222bb05ec0a94fbe"} Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.007357 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4jdnj" 
event={"ID":"edb3559d-359a-4add-8216-afb68a19e111","Type":"ContainerStarted","Data":"984fdff2ffc75a0094b3b874f3d37a6807c50a6a1b4538f4b70820bcd412bea0"} Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.023981 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-xpg98" podStartSLOduration=2.023960397 podStartE2EDuration="2.023960397s" podCreationTimestamp="2026-03-20 07:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:22.015445622 +0000 UTC m=+1434.274756773" watchObservedRunningTime="2026-03-20 07:13:22.023960397 +0000 UTC m=+1434.283271548" Mar 20 07:13:22 crc kubenswrapper[5136]: I0320 07:13:22.061752 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" podStartSLOduration=2.061733682 podStartE2EDuration="2.061733682s" podCreationTimestamp="2026-03-20 07:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:22.044112263 +0000 UTC m=+1434.303423424" watchObservedRunningTime="2026-03-20 07:13:22.061733682 +0000 UTC m=+1434.321044833" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.016200 5136 generic.go:334] "Generic (PLEG): container finished" podID="7fd262d5-bfc7-49ae-908e-709fa9d0f55f" containerID="0d231656eec1735b1a5bc9e9719bbfb1dc2f5b357bdf072349808ab12f944278" exitCode=0 Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.016276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" event={"ID":"7fd262d5-bfc7-49ae-908e-709fa9d0f55f","Type":"ContainerDied","Data":"0d231656eec1735b1a5bc9e9719bbfb1dc2f5b357bdf072349808ab12f944278"} Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.017792 5136 generic.go:334] "Generic (PLEG): container 
finished" podID="2a1492b7-73df-440c-9246-ae0e3c2e8802" containerID="76ecc1efaf0109117b2021b8dc8f89423ec738c34b9f16b5ea8ada8e167cdf99" exitCode=0 Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.017875 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f90-account-create-update-lv952" event={"ID":"2a1492b7-73df-440c-9246-ae0e3c2e8802","Type":"ContainerDied","Data":"76ecc1efaf0109117b2021b8dc8f89423ec738c34b9f16b5ea8ada8e167cdf99"} Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.020767 5136 generic.go:334] "Generic (PLEG): container finished" podID="fdfd9851-96cd-483e-9e66-b1cc255cb3e2" containerID="22e326ee5e74b9ee2e3ad6076eac75a725689f97883ae8bb80d1de284edb7a74" exitCode=0 Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.020872 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xpg98" event={"ID":"fdfd9851-96cd-483e-9e66-b1cc255cb3e2","Type":"ContainerDied","Data":"22e326ee5e74b9ee2e3ad6076eac75a725689f97883ae8bb80d1de284edb7a74"} Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.389592 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.473282 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts\") pod \"f91601d4-11a0-4327-8f7e-6856df2b4643\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.473438 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5xcq\" (UniqueName: \"kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq\") pod \"f91601d4-11a0-4327-8f7e-6856df2b4643\" (UID: \"f91601d4-11a0-4327-8f7e-6856df2b4643\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.474106 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f91601d4-11a0-4327-8f7e-6856df2b4643" (UID: "f91601d4-11a0-4327-8f7e-6856df2b4643"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.474781 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f91601d4-11a0-4327-8f7e-6856df2b4643-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.479660 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq" (OuterVolumeSpecName: "kube-api-access-j5xcq") pod "f91601d4-11a0-4327-8f7e-6856df2b4643" (UID: "f91601d4-11a0-4327-8f7e-6856df2b4643"). InnerVolumeSpecName "kube-api-access-j5xcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.482194 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.525149 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.576161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t25k2\" (UniqueName: \"kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2\") pod \"bfbcdb71-4e43-4243-a408-08d69b6d7328\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.576214 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts\") pod \"bfbcdb71-4e43-4243-a408-08d69b6d7328\" (UID: \"bfbcdb71-4e43-4243-a408-08d69b6d7328\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.576703 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5xcq\" (UniqueName: \"kubernetes.io/projected/f91601d4-11a0-4327-8f7e-6856df2b4643-kube-api-access-j5xcq\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.576701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bfbcdb71-4e43-4243-a408-08d69b6d7328" (UID: "bfbcdb71-4e43-4243-a408-08d69b6d7328"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.580127 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2" (OuterVolumeSpecName: "kube-api-access-t25k2") pod "bfbcdb71-4e43-4243-a408-08d69b6d7328" (UID: "bfbcdb71-4e43-4243-a408-08d69b6d7328"). InnerVolumeSpecName "kube-api-access-t25k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.677968 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts\") pod \"edb3559d-359a-4add-8216-afb68a19e111\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.678173 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dvtrm\" (UniqueName: \"kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm\") pod \"edb3559d-359a-4add-8216-afb68a19e111\" (UID: \"edb3559d-359a-4add-8216-afb68a19e111\") " Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.678356 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "edb3559d-359a-4add-8216-afb68a19e111" (UID: "edb3559d-359a-4add-8216-afb68a19e111"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.678601 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t25k2\" (UniqueName: \"kubernetes.io/projected/bfbcdb71-4e43-4243-a408-08d69b6d7328-kube-api-access-t25k2\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.678623 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bfbcdb71-4e43-4243-a408-08d69b6d7328-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.678635 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/edb3559d-359a-4add-8216-afb68a19e111-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.681515 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm" (OuterVolumeSpecName: "kube-api-access-dvtrm") pod "edb3559d-359a-4add-8216-afb68a19e111" (UID: "edb3559d-359a-4add-8216-afb68a19e111"). InnerVolumeSpecName "kube-api-access-dvtrm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:23 crc kubenswrapper[5136]: I0320 07:13:23.780839 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dvtrm\" (UniqueName: \"kubernetes.io/projected/edb3559d-359a-4add-8216-afb68a19e111-kube-api-access-dvtrm\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.032053 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" event={"ID":"f91601d4-11a0-4327-8f7e-6856df2b4643","Type":"ContainerDied","Data":"af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423"} Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.032121 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af8e134c0a47ca46cccc6237d8e45d49ba32aceec4f114111cd0acda77a55423" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.032067 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-zlrc6" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.045384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-2sj8m" event={"ID":"bfbcdb71-4e43-4243-a408-08d69b6d7328","Type":"ContainerDied","Data":"df133601606c7da3d6d3058e70578121e914114c2561a58b46f39470ce91218f"} Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.045419 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df133601606c7da3d6d3058e70578121e914114c2561a58b46f39470ce91218f" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.045421 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-2sj8m" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.047453 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-4jdnj" event={"ID":"edb3559d-359a-4add-8216-afb68a19e111","Type":"ContainerDied","Data":"984fdff2ffc75a0094b3b874f3d37a6807c50a6a1b4538f4b70820bcd412bea0"} Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.047489 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="984fdff2ffc75a0094b3b874f3d37a6807c50a6a1b4538f4b70820bcd412bea0" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.047491 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-4jdnj" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.571779 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.589944 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.600540 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.627661 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.627945 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-log" containerID="cri-o://0d813176fbff380f2ecf1396ef58dbd6653c9f7fc00b3a5aa2671557b6efffb1" gracePeriod=30 Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.628001 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-httpd" containerID="cri-o://0955f2ff6e58a181eb4657826df44412140ece0a092f87584721929f1c23cd5d" gracePeriod=30 Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.692884 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts\") pod \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693377 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vk2sf\" (UniqueName: \"kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf\") pod \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\" (UID: \"7fd262d5-bfc7-49ae-908e-709fa9d0f55f\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693467 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts\") pod 
\"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693560 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts\") pod \"2a1492b7-73df-440c-9246-ae0e3c2e8802\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693651 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9wlx\" (UniqueName: \"kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx\") pod \"2a1492b7-73df-440c-9246-ae0e3c2e8802\" (UID: \"2a1492b7-73df-440c-9246-ae0e3c2e8802\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693751 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fdfd9851-96cd-483e-9e66-b1cc255cb3e2" (UID: "fdfd9851-96cd-483e-9e66-b1cc255cb3e2"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693756 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm9sg\" (UniqueName: \"kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg\") pod \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\" (UID: \"fdfd9851-96cd-483e-9e66-b1cc255cb3e2\") " Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.694570 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.693884 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7fd262d5-bfc7-49ae-908e-709fa9d0f55f" (UID: "7fd262d5-bfc7-49ae-908e-709fa9d0f55f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.694179 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2a1492b7-73df-440c-9246-ae0e3c2e8802" (UID: "2a1492b7-73df-440c-9246-ae0e3c2e8802"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.699873 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx" (OuterVolumeSpecName: "kube-api-access-n9wlx") pod "2a1492b7-73df-440c-9246-ae0e3c2e8802" (UID: "2a1492b7-73df-440c-9246-ae0e3c2e8802"). InnerVolumeSpecName "kube-api-access-n9wlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.700558 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf" (OuterVolumeSpecName: "kube-api-access-vk2sf") pod "7fd262d5-bfc7-49ae-908e-709fa9d0f55f" (UID: "7fd262d5-bfc7-49ae-908e-709fa9d0f55f"). InnerVolumeSpecName "kube-api-access-vk2sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.705917 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg" (OuterVolumeSpecName: "kube-api-access-xm9sg") pod "fdfd9851-96cd-483e-9e66-b1cc255cb3e2" (UID: "fdfd9851-96cd-483e-9e66-b1cc255cb3e2"). InnerVolumeSpecName "kube-api-access-xm9sg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.796409 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm9sg\" (UniqueName: \"kubernetes.io/projected/fdfd9851-96cd-483e-9e66-b1cc255cb3e2-kube-api-access-xm9sg\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.796466 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.796479 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vk2sf\" (UniqueName: \"kubernetes.io/projected/7fd262d5-bfc7-49ae-908e-709fa9d0f55f-kube-api-access-vk2sf\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.796492 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2a1492b7-73df-440c-9246-ae0e3c2e8802-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:24 crc kubenswrapper[5136]: I0320 07:13:24.796503 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9wlx\" (UniqueName: \"kubernetes.io/projected/2a1492b7-73df-440c-9246-ae0e3c2e8802-kube-api-access-n9wlx\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.057667 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-lv952" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.057953 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f90-account-create-update-lv952" event={"ID":"2a1492b7-73df-440c-9246-ae0e3c2e8802","Type":"ContainerDied","Data":"0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e"} Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.058013 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e1ca15b07b2733ca7965ba422cda8fcd277942247dadeac4ec18d6a4869db7e" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.060330 5136 generic.go:334] "Generic (PLEG): container finished" podID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerID="0d813176fbff380f2ecf1396ef58dbd6653c9f7fc00b3a5aa2671557b6efffb1" exitCode=143 Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.060410 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerDied","Data":"0d813176fbff380f2ecf1396ef58dbd6653c9f7fc00b3a5aa2671557b6efffb1"} Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.062506 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-xpg98" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.062867 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-xpg98" event={"ID":"fdfd9851-96cd-483e-9e66-b1cc255cb3e2","Type":"ContainerDied","Data":"014c5744df20c0ca1d0890e233c317c3d3d1672f4d7a42bb3df502680f5a6c07"} Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.062980 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="014c5744df20c0ca1d0890e233c317c3d3d1672f4d7a42bb3df502680f5a6c07" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.064600 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" event={"ID":"7fd262d5-bfc7-49ae-908e-709fa9d0f55f","Type":"ContainerDied","Data":"d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c"} Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.064625 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d408b64e16a1a16cb3c2ba0552e5053433fcf11939b0c313e88451cde94de24c" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.064678 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-ntmkb" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.731732 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.733435 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-log" containerID="cri-o://2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12" gracePeriod=30 Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.734354 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-httpd" containerID="cri-o://7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16" gracePeriod=30 Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.848409 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922154 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922558 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922661 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922728 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922754 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922780 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922879 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b5ppp\" (UniqueName: \"kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp\") pod \"970a5b8d-94f8-4638-b351-40867c27568a\" (UID: \"970a5b8d-94f8-4638-b351-40867c27568a\") " Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.922985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.923390 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.923411 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/970a5b8d-94f8-4638-b351-40867c27568a-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.928492 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts" (OuterVolumeSpecName: "scripts") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.932988 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp" (OuterVolumeSpecName: "kube-api-access-b5ppp") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "kube-api-access-b5ppp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.956655 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:25 crc kubenswrapper[5136]: I0320 07:13:25.996132 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.015242 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data" (OuterVolumeSpecName: "config-data") pod "970a5b8d-94f8-4638-b351-40867c27568a" (UID: "970a5b8d-94f8-4638-b351-40867c27568a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.029846 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.029881 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.029919 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b5ppp\" (UniqueName: \"kubernetes.io/projected/970a5b8d-94f8-4638-b351-40867c27568a-kube-api-access-b5ppp\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.029930 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 
07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.029941 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/970a5b8d-94f8-4638-b351-40867c27568a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.076242 5136 generic.go:334] "Generic (PLEG): container finished" podID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerID="2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12" exitCode=143 Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.076303 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerDied","Data":"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12"} Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.079134 5136 generic.go:334] "Generic (PLEG): container finished" podID="970a5b8d-94f8-4638-b351-40867c27568a" containerID="5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5" exitCode=0 Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.079166 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerDied","Data":"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5"} Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.079187 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"970a5b8d-94f8-4638-b351-40867c27568a","Type":"ContainerDied","Data":"b1ecb08e86fb97ca8048b41dfca62020c1e1817555ee06471950360149ce18be"} Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.079204 5136 scope.go:117] "RemoveContainer" containerID="34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.079242 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.105059 5136 scope.go:117] "RemoveContainer" containerID="ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.120636 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.129644 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141428 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141758 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a1492b7-73df-440c-9246-ae0e3c2e8802" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141772 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a1492b7-73df-440c-9246-ae0e3c2e8802" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141790 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-notification-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141798 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-notification-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141822 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="proxy-httpd" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141829 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="proxy-httpd" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141840 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91601d4-11a0-4327-8f7e-6856df2b4643" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141847 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91601d4-11a0-4327-8f7e-6856df2b4643" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141855 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="sg-core" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141861 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="sg-core" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141872 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-central-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141877 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-central-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141888 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfbcdb71-4e43-4243-a408-08d69b6d7328" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141894 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfbcdb71-4e43-4243-a408-08d69b6d7328" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141904 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edb3559d-359a-4add-8216-afb68a19e111" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141910 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="edb3559d-359a-4add-8216-afb68a19e111" containerName="mariadb-database-create" Mar 20 07:13:26 crc 
kubenswrapper[5136]: E0320 07:13:26.141919 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fd262d5-bfc7-49ae-908e-709fa9d0f55f" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141924 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fd262d5-bfc7-49ae-908e-709fa9d0f55f" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.141941 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfd9851-96cd-483e-9e66-b1cc255cb3e2" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.141947 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfd9851-96cd-483e-9e66-b1cc255cb3e2" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142113 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="sg-core" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142124 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="proxy-httpd" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142132 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fd262d5-bfc7-49ae-908e-709fa9d0f55f" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142144 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfd9851-96cd-483e-9e66-b1cc255cb3e2" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142154 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91601d4-11a0-4327-8f7e-6856df2b4643" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142169 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-central-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142184 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="edb3559d-359a-4add-8216-afb68a19e111" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142196 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfbcdb71-4e43-4243-a408-08d69b6d7328" containerName="mariadb-database-create" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142211 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a1492b7-73df-440c-9246-ae0e3c2e8802" containerName="mariadb-account-create-update" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.142218 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="970a5b8d-94f8-4638-b351-40867c27568a" containerName="ceilometer-notification-agent" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.143696 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.144873 5136 scope.go:117] "RemoveContainer" containerID="b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.152846 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.153013 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.165681 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.191079 5136 scope.go:117] "RemoveContainer" containerID="5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.207588 5136 scope.go:117] "RemoveContainer" containerID="34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.208177 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11\": container with ID starting with 34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11 not found: ID does not exist" containerID="34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.208245 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11"} err="failed to get container status \"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11\": rpc error: code = NotFound desc = could not find container \"34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11\": 
container with ID starting with 34f70125332213277198b4aa08cb4ab1e11269e86d48316d0f7016fd34641a11 not found: ID does not exist" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.208275 5136 scope.go:117] "RemoveContainer" containerID="ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.209923 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49\": container with ID starting with ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49 not found: ID does not exist" containerID="ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.209971 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49"} err="failed to get container status \"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49\": rpc error: code = NotFound desc = could not find container \"ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49\": container with ID starting with ffe7e9a36109064a023ce4deecb9d17707e69376558ef1e507130c92da224a49 not found: ID does not exist" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.209997 5136 scope.go:117] "RemoveContainer" containerID="b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.210385 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb\": container with ID starting with b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb not found: ID does not exist" 
containerID="b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.210415 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb"} err="failed to get container status \"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb\": rpc error: code = NotFound desc = could not find container \"b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb\": container with ID starting with b2f12931ef7797eac5d157b9c5f74cea60eb664c3d141b9d09cf7fe30ae5c1fb not found: ID does not exist" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.210457 5136 scope.go:117] "RemoveContainer" containerID="5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5" Mar 20 07:13:26 crc kubenswrapper[5136]: E0320 07:13:26.210782 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5\": container with ID starting with 5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5 not found: ID does not exist" containerID="5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.210859 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5"} err="failed to get container status \"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5\": rpc error: code = NotFound desc = could not find container \"5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5\": container with ID starting with 5d436980ce14d40aef3f47b677d0bc5908f793eaa1792108e00dd687af9bd7f5 not found: ID does not exist" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235166 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjcz6\" (UniqueName: \"kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235216 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235281 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235322 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235339 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235478 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.235571 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.337751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.338115 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.339269 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjcz6\" (UniqueName: \"kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.339455 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc 
kubenswrapper[5136]: I0320 07:13:26.339915 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.340208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.340293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.338382 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.340796 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.342098 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.344251 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.345446 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.346427 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.359634 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjcz6\" (UniqueName: \"kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6\") pod \"ceilometer-0\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") " pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.413591 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="970a5b8d-94f8-4638-b351-40867c27568a" path="/var/lib/kubelet/pods/970a5b8d-94f8-4638-b351-40867c27568a/volumes" Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.462730 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:26 crc kubenswrapper[5136]: W0320 07:13:26.946989 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe945390_82b0_4512_8028_a0207cd7796b.slice/crio-5cb6cc418050d60bffd563fe7ea3892f2ad06634082325336f1fe95bdace7d2b WatchSource:0}: Error finding container 5cb6cc418050d60bffd563fe7ea3892f2ad06634082325336f1fe95bdace7d2b: Status 404 returned error can't find the container with id 5cb6cc418050d60bffd563fe7ea3892f2ad06634082325336f1fe95bdace7d2b Mar 20 07:13:26 crc kubenswrapper[5136]: I0320 07:13:26.948755 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:27 crc kubenswrapper[5136]: I0320 07:13:27.089023 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerStarted","Data":"5cb6cc418050d60bffd563fe7ea3892f2ad06634082325336f1fe95bdace7d2b"} Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.105059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerStarted","Data":"b5b1edae1009c5322668a89756ee1e94a710a5cf94e54c2867151f60883078bc"} Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.127895 5136 generic.go:334] "Generic (PLEG): container finished" podID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerID="0955f2ff6e58a181eb4657826df44412140ece0a092f87584721929f1c23cd5d" exitCode=0 Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.127935 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerDied","Data":"0955f2ff6e58a181eb4657826df44412140ece0a092f87584721929f1c23cd5d"} Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.211954 5136 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277415 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrqvz\" (UniqueName: \"kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277489 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277553 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277589 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277608 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277687 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.277711 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f7a82425-91b7-43b8-b26e-ace42be9cdba\" (UID: \"f7a82425-91b7-43b8-b26e-ace42be9cdba\") " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.282392 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs" (OuterVolumeSpecName: "logs") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.284584 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.285464 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts" (OuterVolumeSpecName: "scripts") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.287176 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.289565 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz" (OuterVolumeSpecName: "kube-api-access-zrqvz") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "kube-api-access-zrqvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.309026 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.369275 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data" (OuterVolumeSpecName: "config-data") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.378150 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f7a82425-91b7-43b8-b26e-ace42be9cdba" (UID: "f7a82425-91b7-43b8-b26e-ace42be9cdba"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379536 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrqvz\" (UniqueName: \"kubernetes.io/projected/f7a82425-91b7-43b8-b26e-ace42be9cdba-kube-api-access-zrqvz\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379556 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379565 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379574 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 
07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379582 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379591 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a82425-91b7-43b8-b26e-ace42be9cdba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379598 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f7a82425-91b7-43b8-b26e-ace42be9cdba-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.379626 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.412784 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 20 07:13:28 crc kubenswrapper[5136]: I0320 07:13:28.481487 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.037139 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:46442->10.217.0.152:9292: read: connection reset by peer" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.037149 5136 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.152:9292/healthcheck\": read tcp 10.217.0.2:46436->10.217.0.152:9292: read: connection reset by peer" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.138146 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerStarted","Data":"34fa52e053cd85fb4fd84973fd16df7f7008a21e7c7592737d31a09951fc533c"} Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.150347 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"f7a82425-91b7-43b8-b26e-ace42be9cdba","Type":"ContainerDied","Data":"6c6e6f554a2daf084995a53a820c6c15f7723c013708ede47a5e369f225ae2a2"} Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.150397 5136 scope.go:117] "RemoveContainer" containerID="0955f2ff6e58a181eb4657826df44412140ece0a092f87584721929f1c23cd5d" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.150405 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.201748 5136 scope.go:117] "RemoveContainer" containerID="0d813176fbff380f2ecf1396ef58dbd6653c9f7fc00b3a5aa2671557b6efffb1" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.238174 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.246487 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.252899 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:29 crc kubenswrapper[5136]: E0320 07:13:29.253231 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-httpd" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.253244 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-httpd" Mar 20 07:13:29 crc kubenswrapper[5136]: E0320 07:13:29.253268 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-log" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.253273 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-log" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.253450 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-log" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.253467 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" containerName="glance-httpd" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.254269 5136 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.287589 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.287863 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.294721 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.397860 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8jnh\" (UniqueName: \"kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.397897 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.397935 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.397985 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.398013 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.398030 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.398063 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.398108 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499359 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499675 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499695 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499730 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499777 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499823 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8jnh\" (UniqueName: 
\"kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499844 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.499883 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.501789 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.501835 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.502163 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: 
\"fe20adf9-d6e2-4487-a176-32ddd55eb051\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.520280 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.520885 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.521408 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.521473 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.524971 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8jnh\" (UniqueName: \"kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " 
pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.579606 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-0\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") " pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.610077 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.730260 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806024 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806081 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806123 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806160 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806268 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx28m\" (UniqueName: \"kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806321 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806393 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.806425 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs\") pod \"5249fb5b-8908-4b21-9ea3-28508854ce4a\" (UID: \"5249fb5b-8908-4b21-9ea3-28508854ce4a\") " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.807292 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs" (OuterVolumeSpecName: "logs") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.807565 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.816432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m" (OuterVolumeSpecName: "kube-api-access-qx28m") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "kube-api-access-qx28m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.819246 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.819294 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts" (OuterVolumeSpecName: "scripts") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.857468 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.878707 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data" (OuterVolumeSpecName: "config-data") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.909956 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.909987 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5249fb5b-8908-4b21-9ea3-28508854ce4a-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.910017 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.910026 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc 
kubenswrapper[5136]: I0320 07:13:29.910039 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx28m\" (UniqueName: \"kubernetes.io/projected/5249fb5b-8908-4b21-9ea3-28508854ce4a-kube-api-access-qx28m\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.910047 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.910056 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.914258 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5249fb5b-8908-4b21-9ea3-28508854ce4a" (UID: "5249fb5b-8908-4b21-9ea3-28508854ce4a"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:29 crc kubenswrapper[5136]: I0320 07:13:29.942201 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.012528 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.012565 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5249fb5b-8908-4b21-9ea3-28508854ce4a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.164945 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerStarted","Data":"4dd1fda5ac18c23a32359ccddfe66ce995d3d62967fe3ba4801e95a4970c56fb"} Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.168183 5136 generic.go:334] "Generic (PLEG): container finished" podID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerID="7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16" exitCode=0 Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.168229 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerDied","Data":"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16"} Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.168252 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5249fb5b-8908-4b21-9ea3-28508854ce4a","Type":"ContainerDied","Data":"c7adb5a2a9c06da1244ea87915af7e6cfa3d0ce95887a1e45fa3f12d10155ea2"} Mar 20 07:13:30 
crc kubenswrapper[5136]: I0320 07:13:30.168273 5136 scope.go:117] "RemoveContainer" containerID="7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.168302 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.201937 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.219362 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.238295 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:30 crc kubenswrapper[5136]: E0320 07:13:30.238694 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-log" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.238707 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-log" Mar 20 07:13:30 crc kubenswrapper[5136]: E0320 07:13:30.238722 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-httpd" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.238727 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-httpd" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.238933 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" containerName="glance-httpd" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.238954 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" 
containerName="glance-log" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.240012 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.241988 5136 scope.go:117] "RemoveContainer" containerID="2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.242405 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.242613 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.284713 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.293116 5136 scope.go:117] "RemoveContainer" containerID="7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16" Mar 20 07:13:30 crc kubenswrapper[5136]: E0320 07:13:30.293585 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16\": container with ID starting with 7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16 not found: ID does not exist" containerID="7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.293617 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16"} err="failed to get container status \"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16\": rpc error: code = NotFound desc = could not find container 
\"7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16\": container with ID starting with 7cf9a9b5f81baf8817adb933425de6d8ef8fbbd11c60012bf91c2922f7222f16 not found: ID does not exist" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.293642 5136 scope.go:117] "RemoveContainer" containerID="2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12" Mar 20 07:13:30 crc kubenswrapper[5136]: E0320 07:13:30.293857 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12\": container with ID starting with 2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12 not found: ID does not exist" containerID="2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.293880 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12"} err="failed to get container status \"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12\": rpc error: code = NotFound desc = could not find container \"2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12\": container with ID starting with 2e7ea895afe37d08e9fbef22c08d1dd0be01aa9e1e4c60d8c5a83cf678e40e12 not found: ID does not exist" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.307973 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:13:30 crc kubenswrapper[5136]: W0320 07:13:30.308359 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe20adf9_d6e2_4487_a176_32ddd55eb051.slice/crio-9a85481c71cfaf5395cd3f9b7fc38785dc4f071aee32ec8ab4a9c0c94e256ebb WatchSource:0}: Error finding container 
9a85481c71cfaf5395cd3f9b7fc38785dc4f071aee32ec8ab4a9c0c94e256ebb: Status 404 returned error can't find the container with id 9a85481c71cfaf5395cd3f9b7fc38785dc4f071aee32ec8ab4a9c0c94e256ebb Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.319797 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.319887 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.319919 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.319956 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.319993 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.320035 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.320082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jl6kj\" (UniqueName: \"kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.320142 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.410351 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5249fb5b-8908-4b21-9ea3-28508854ce4a" path="/var/lib/kubelet/pods/5249fb5b-8908-4b21-9ea3-28508854ce4a/volumes" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.411183 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a82425-91b7-43b8-b26e-ace42be9cdba" path="/var/lib/kubelet/pods/f7a82425-91b7-43b8-b26e-ace42be9cdba/volumes" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423176 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jl6kj\" (UniqueName: \"kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423319 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423422 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423518 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423564 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423646 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423685 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423697 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423772 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.423914 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.424550 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.429394 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.429432 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.435047 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.435446 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.442454 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jl6kj\" (UniqueName: \"kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj\") pod 
\"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.452929 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-internal-api-0\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.573993 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.678783 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rzgpn"] Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.679883 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.682522 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.682523 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w7tkw" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.682678 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.694646 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rzgpn"] Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.729512 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4hm6\" (UniqueName: 
\"kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.729757 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.729870 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.729925 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.833140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.833200 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.833241 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.833297 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4hm6\" (UniqueName: \"kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.837018 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.837251 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.843534 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.855494 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4hm6\" (UniqueName: \"kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6\") pod \"nova-cell0-conductor-db-sync-rzgpn\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:30 crc kubenswrapper[5136]: I0320 07:13:30.998214 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.191575 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerStarted","Data":"73dd53cb555a4926da6900afb36bb4071de4ca4b83ec9f1d5c6fe4f78cea9554"} Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.192518 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.207280 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerStarted","Data":"ddf75c942dcbf834dfd88d5bc8a1e8a0fa00deb6223f84700bb4c75bb0cce612"} Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.207329 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerStarted","Data":"9a85481c71cfaf5395cd3f9b7fc38785dc4f071aee32ec8ab4a9c0c94e256ebb"} Mar 20 07:13:31 crc kubenswrapper[5136]: W0320 07:13:31.220893 5136 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod141e5942_2bf9_424c_a6a7_7c93afdad7dc.slice/crio-05a82ec3ba1c0f76f4e62724ce02ba143154e57bc0f0c3ee005d1f4b00278ffd WatchSource:0}: Error finding container 05a82ec3ba1c0f76f4e62724ce02ba143154e57bc0f0c3ee005d1f4b00278ffd: Status 404 returned error can't find the container with id 05a82ec3ba1c0f76f4e62724ce02ba143154e57bc0f0c3ee005d1f4b00278ffd Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.227329 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.228588 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.266481444 podStartE2EDuration="5.228573095s" podCreationTimestamp="2026-03-20 07:13:26 +0000 UTC" firstStartedPulling="2026-03-20 07:13:26.949037385 +0000 UTC m=+1439.208348546" lastFinishedPulling="2026-03-20 07:13:30.911129046 +0000 UTC m=+1443.170440197" observedRunningTime="2026-03-20 07:13:31.214786856 +0000 UTC m=+1443.474098007" watchObservedRunningTime="2026-03-20 07:13:31.228573095 +0000 UTC m=+1443.487884246" Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.444409 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:31 crc kubenswrapper[5136]: I0320 07:13:31.487985 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rzgpn"] Mar 20 07:13:31 crc kubenswrapper[5136]: W0320 07:13:31.508269 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e901a54_c442_45fd_a0d8_1568f850efb4.slice/crio-811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652 WatchSource:0}: Error finding container 811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652: Status 404 
returned error can't find the container with id 811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652
Mar 20 07:13:32 crc kubenswrapper[5136]: I0320 07:13:32.223300 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerStarted","Data":"662932f3f7c10a1e5293dce36be1a7b6f6fe4dc40e2e2d324b7c68188c034162"}
Mar 20 07:13:32 crc kubenswrapper[5136]: I0320 07:13:32.223997 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerStarted","Data":"05a82ec3ba1c0f76f4e62724ce02ba143154e57bc0f0c3ee005d1f4b00278ffd"}
Mar 20 07:13:32 crc kubenswrapper[5136]: I0320 07:13:32.226373 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" event={"ID":"2e901a54-c442-45fd-a0d8-1568f850efb4","Type":"ContainerStarted","Data":"811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652"}
Mar 20 07:13:32 crc kubenswrapper[5136]: I0320 07:13:32.231492 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerStarted","Data":"08811e57d5ae08f29bf6ac8aa7f95e929dc4a7c310d13f900fb2c645979418d6"}
Mar 20 07:13:32 crc kubenswrapper[5136]: I0320 07:13:32.269068 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.269047884 podStartE2EDuration="3.269047884s" podCreationTimestamp="2026-03-20 07:13:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:32.249224507 +0000 UTC m=+1444.508535658" watchObservedRunningTime="2026-03-20 07:13:32.269047884 +0000 UTC m=+1444.528359055"
Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 07:13:33.245511 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerStarted","Data":"3cd1e6e7f78367e01aa376387fc42404408757a25929ca8c639774857d99ccfd"}
Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 07:13:33.246490 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-central-agent" containerID="cri-o://b5b1edae1009c5322668a89756ee1e94a710a5cf94e54c2867151f60883078bc" gracePeriod=30
Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 07:13:33.246601 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="proxy-httpd" containerID="cri-o://73dd53cb555a4926da6900afb36bb4071de4ca4b83ec9f1d5c6fe4f78cea9554" gracePeriod=30
Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 07:13:33.246648 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="sg-core" containerID="cri-o://4dd1fda5ac18c23a32359ccddfe66ce995d3d62967fe3ba4801e95a4970c56fb" gracePeriod=30
Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 07:13:33.246689 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-notification-agent" containerID="cri-o://34fa52e053cd85fb4fd84973fd16df7f7008a21e7c7592737d31a09951fc533c" gracePeriod=30
Mar 20 07:13:33 crc kubenswrapper[5136]: I0320 07:13:33.285550 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.285534187 podStartE2EDuration="3.285534187s" podCreationTimestamp="2026-03-20 07:13:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:33.271327555 +0000 UTC m=+1445.530638706" watchObservedRunningTime="2026-03-20 07:13:33.285534187 +0000 UTC m=+1445.544845328"
Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.255937 5136 generic.go:334] "Generic (PLEG): container finished" podID="be945390-82b0-4512-8028-a0207cd7796b" containerID="73dd53cb555a4926da6900afb36bb4071de4ca4b83ec9f1d5c6fe4f78cea9554" exitCode=0
Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.255972 5136 generic.go:334] "Generic (PLEG): container finished" podID="be945390-82b0-4512-8028-a0207cd7796b" containerID="4dd1fda5ac18c23a32359ccddfe66ce995d3d62967fe3ba4801e95a4970c56fb" exitCode=2
Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.255982 5136 generic.go:334] "Generic (PLEG): container finished" podID="be945390-82b0-4512-8028-a0207cd7796b" containerID="34fa52e053cd85fb4fd84973fd16df7f7008a21e7c7592737d31a09951fc533c" exitCode=0
Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.256175 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerDied","Data":"73dd53cb555a4926da6900afb36bb4071de4ca4b83ec9f1d5c6fe4f78cea9554"}
Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.256214 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerDied","Data":"4dd1fda5ac18c23a32359ccddfe66ce995d3d62967fe3ba4801e95a4970c56fb"}
Mar 20 07:13:34 crc kubenswrapper[5136]: I0320 07:13:34.256228 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerDied","Data":"34fa52e053cd85fb4fd84973fd16df7f7008a21e7c7592737d31a09951fc533c"}
Mar 20 07:13:37 crc kubenswrapper[5136]: I0320 07:13:37.295043 5136 generic.go:334] "Generic (PLEG): container finished" podID="be945390-82b0-4512-8028-a0207cd7796b" containerID="b5b1edae1009c5322668a89756ee1e94a710a5cf94e54c2867151f60883078bc" exitCode=0
Mar 20 07:13:37 crc kubenswrapper[5136]: I0320 07:13:37.295101 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerDied","Data":"b5b1edae1009c5322668a89756ee1e94a710a5cf94e54c2867151f60883078bc"}
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.941174 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996310 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") "
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996359 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjcz6\" (UniqueName: \"kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") "
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996394 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") "
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996415 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") "
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996464 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") "
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.996487 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") "
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.997085 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data\") pod \"be945390-82b0-4512-8028-a0207cd7796b\" (UID: \"be945390-82b0-4512-8028-a0207cd7796b\") "
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.997112 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.997137 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.997434 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:38 crc kubenswrapper[5136]: I0320 07:13:38.997456 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/be945390-82b0-4512-8028-a0207cd7796b-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.002237 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts" (OuterVolumeSpecName: "scripts") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.004943 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6" (OuterVolumeSpecName: "kube-api-access-xjcz6") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "kube-api-access-xjcz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.037057 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.069022 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.088906 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data" (OuterVolumeSpecName: "config-data") pod "be945390-82b0-4512-8028-a0207cd7796b" (UID: "be945390-82b0-4512-8028-a0207cd7796b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.098218 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.098250 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.098263 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjcz6\" (UniqueName: \"kubernetes.io/projected/be945390-82b0-4512-8028-a0207cd7796b-kube-api-access-xjcz6\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.098271 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.098279 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/be945390-82b0-4512-8028-a0207cd7796b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.315653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"be945390-82b0-4512-8028-a0207cd7796b","Type":"ContainerDied","Data":"5cb6cc418050d60bffd563fe7ea3892f2ad06634082325336f1fe95bdace7d2b"}
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.316042 5136 scope.go:117] "RemoveContainer" containerID="73dd53cb555a4926da6900afb36bb4071de4ca4b83ec9f1d5c6fe4f78cea9554"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.315928 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.317831 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" event={"ID":"2e901a54-c442-45fd-a0d8-1568f850efb4","Type":"ContainerStarted","Data":"a6ef812d133f600bf2d930bb51a86bd9525704f16b2a680bf46ad2719737b44f"}
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.337103 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" podStartSLOduration=2.184762052 podStartE2EDuration="9.337087932s" podCreationTimestamp="2026-03-20 07:13:30 +0000 UTC" firstStartedPulling="2026-03-20 07:13:31.511242932 +0000 UTC m=+1443.770554083" lastFinishedPulling="2026-03-20 07:13:38.663568802 +0000 UTC m=+1450.922879963" observedRunningTime="2026-03-20 07:13:39.332451468 +0000 UTC m=+1451.591762639" watchObservedRunningTime="2026-03-20 07:13:39.337087932 +0000 UTC m=+1451.596399073"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.344464 5136 scope.go:117] "RemoveContainer" containerID="4dd1fda5ac18c23a32359ccddfe66ce995d3d62967fe3ba4801e95a4970c56fb"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.357199 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.373443 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383012 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 20 07:13:39 crc kubenswrapper[5136]: E0320 07:13:39.383352 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-central-agent"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383369 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-central-agent"
Mar 20 07:13:39 crc kubenswrapper[5136]: E0320 07:13:39.383379 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="proxy-httpd"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383386 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="proxy-httpd"
Mar 20 07:13:39 crc kubenswrapper[5136]: E0320 07:13:39.383403 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="sg-core"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383409 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="sg-core"
Mar 20 07:13:39 crc kubenswrapper[5136]: E0320 07:13:39.383418 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-notification-agent"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383425 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-notification-agent"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383578 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-notification-agent"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383594 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="proxy-httpd"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383609 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="sg-core"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.383623 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="be945390-82b0-4512-8028-a0207cd7796b" containerName="ceilometer-central-agent"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.384026 5136 scope.go:117] "RemoveContainer" containerID="34fa52e053cd85fb4fd84973fd16df7f7008a21e7c7592737d31a09951fc533c"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.385217 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.387431 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.388371 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.399727 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.439931 5136 scope.go:117] "RemoveContainer" containerID="b5b1edae1009c5322668a89756ee1e94a710a5cf94e54c2867151f60883078bc"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529360 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfnb4\" (UniqueName: \"kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529430 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529461 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529505 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529534 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529591 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.529615 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.611700 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.611844 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632087 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632238 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632279 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632387 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfnb4\" (UniqueName: \"kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632454 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632497 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632569 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.632745 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.633392 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.637472 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.637695 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.638284 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.651327 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.651494 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfnb4\" (UniqueName: \"kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4\") pod \"ceilometer-0\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") " pod="openstack/ceilometer-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.652214 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.684031 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0"
Mar 20 07:13:39 crc kubenswrapper[5136]: I0320 07:13:39.723180 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 07:13:40 crc kubenswrapper[5136]: W0320 07:13:40.151879 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8f028e7_076a_4fa6_93de_08842dc040f8.slice/crio-8d5e25f452146f28927e0e3f67a50de6aecd3be4f4695a36c4794d1a9ea5eb5c WatchSource:0}: Error finding container 8d5e25f452146f28927e0e3f67a50de6aecd3be4f4695a36c4794d1a9ea5eb5c: Status 404 returned error can't find the container with id 8d5e25f452146f28927e0e3f67a50de6aecd3be4f4695a36c4794d1a9ea5eb5c
Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.154027 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.337065 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerStarted","Data":"8d5e25f452146f28927e0e3f67a50de6aecd3be4f4695a36c4794d1a9ea5eb5c"}
Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.337562 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.337587 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.439725 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be945390-82b0-4512-8028-a0207cd7796b" path="/var/lib/kubelet/pods/be945390-82b0-4512-8028-a0207cd7796b/volumes"
Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.574273 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.574325 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.618553 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 20 07:13:40 crc kubenswrapper[5136]: I0320 07:13:40.627510 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Mar 20 07:13:41 crc kubenswrapper[5136]: I0320 07:13:41.345987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerStarted","Data":"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3"}
Mar 20 07:13:41 crc kubenswrapper[5136]: I0320 07:13:41.346337 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 20 07:13:41 crc kubenswrapper[5136]: I0320 07:13:41.346356 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Mar 20 07:13:42 crc kubenswrapper[5136]: I0320 07:13:42.164681 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 20 07:13:42 crc kubenswrapper[5136]: I0320 07:13:42.167329 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Mar 20 07:13:42 crc kubenswrapper[5136]: I0320 07:13:42.357584 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerStarted","Data":"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c"}
Mar 20 07:13:42 crc kubenswrapper[5136]: I0320 07:13:42.357619 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerStarted","Data":"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6"}
Mar 20 07:13:43 crc kubenswrapper[5136]: I0320 07:13:43.315423 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 07:13:43 crc kubenswrapper[5136]: I0320 07:13:43.410112 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 20 07:13:43 crc kubenswrapper[5136]: I0320 07:13:43.410279 5136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 20 07:13:43 crc kubenswrapper[5136]: I0320 07:13:43.474568 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.374331 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-central-agent" containerID="cri-o://c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3" gracePeriod=30
Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.374642 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerStarted","Data":"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132"}
Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.374695 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.374723 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="proxy-httpd" containerID="cri-o://e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132" gracePeriod=30
Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.374791 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="sg-core" containerID="cri-o://42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c" gracePeriod=30
Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.374842 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-notification-agent" containerID="cri-o://4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6" gracePeriod=30
Mar 20 07:13:44 crc kubenswrapper[5136]: I0320 07:13:44.408655 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.991644474 podStartE2EDuration="5.40864064s" podCreationTimestamp="2026-03-20 07:13:39 +0000 UTC" firstStartedPulling="2026-03-20 07:13:40.15576813 +0000 UTC m=+1452.415079281" lastFinishedPulling="2026-03-20 07:13:43.572764306 +0000 UTC m=+1455.832075447" observedRunningTime="2026-03-20 07:13:44.40450185 +0000 UTC m=+1456.663812991" watchObservedRunningTime="2026-03-20 07:13:44.40864064 +0000 UTC m=+1456.667951781"
Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.385908 5136 generic.go:334] "Generic (PLEG): container finished" podID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerID="e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132" exitCode=0
Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.386224 5136 generic.go:334] "Generic (PLEG): container finished" podID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerID="42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c" exitCode=2
Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.386234 5136 generic.go:334] "Generic (PLEG): container finished" podID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerID="4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6" exitCode=0
Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.385941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerDied","Data":"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132"}
Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.386263 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerDied","Data":"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c"}
Mar 20 07:13:45 crc kubenswrapper[5136]: I0320 07:13:45.386273 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerDied","Data":"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6"}
Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.080474 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.199952 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") "
Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200577 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") "
Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200592 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") "
Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200630 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") "
Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200669 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfnb4\" (UniqueName: \"kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") "
Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200704 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") "
Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.200767 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd\") pod \"e8f028e7-076a-4fa6-93de-08842dc040f8\" (UID: \"e8f028e7-076a-4fa6-93de-08842dc040f8\") "
Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.201522 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.202136 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.205883 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4" (OuterVolumeSpecName: "kube-api-access-lfnb4") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "kube-api-access-lfnb4".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.207721 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts" (OuterVolumeSpecName: "scripts") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.243344 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.276613 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.295786 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data" (OuterVolumeSpecName: "config-data") pod "e8f028e7-076a-4fa6-93de-08842dc040f8" (UID: "e8f028e7-076a-4fa6-93de-08842dc040f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303302 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303337 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303346 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303356 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303365 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfnb4\" (UniqueName: \"kubernetes.io/projected/e8f028e7-076a-4fa6-93de-08842dc040f8-kube-api-access-lfnb4\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303373 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8f028e7-076a-4fa6-93de-08842dc040f8-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.303384 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8f028e7-076a-4fa6-93de-08842dc040f8-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.406937 5136 generic.go:334] "Generic 
(PLEG): container finished" podID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerID="c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3" exitCode=0 Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.406987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerDied","Data":"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3"} Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.407048 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.407070 5136 scope.go:117] "RemoveContainer" containerID="e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.407054 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8f028e7-076a-4fa6-93de-08842dc040f8","Type":"ContainerDied","Data":"8d5e25f452146f28927e0e3f67a50de6aecd3be4f4695a36c4794d1a9ea5eb5c"} Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.439466 5136 scope.go:117] "RemoveContainer" containerID="42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.452203 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.460263 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.462601 5136 scope.go:117] "RemoveContainer" containerID="4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.473649 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.474042 5136 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="proxy-httpd" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474055 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="proxy-httpd" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.474069 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-notification-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474076 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-notification-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.474084 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="sg-core" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474090 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="sg-core" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.474102 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-central-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474109 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-central-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474278 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-notification-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474294 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="ceilometer-central-agent" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 
07:13:47.474310 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="proxy-httpd" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.474319 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" containerName="sg-core" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.475777 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.477781 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.478197 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.482689 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.507389 5136 scope.go:117] "RemoveContainer" containerID="c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.527248 5136 scope.go:117] "RemoveContainer" containerID="e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.528944 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132\": container with ID starting with e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132 not found: ID does not exist" containerID="e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.529077 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132"} err="failed to get container status \"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132\": rpc error: code = NotFound desc = could not find container \"e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132\": container with ID starting with e50fe10da6c1532394e56e4ce3689d3fa1a887a33e4beb511b90e866d6850132 not found: ID does not exist" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.529177 5136 scope.go:117] "RemoveContainer" containerID="42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.529707 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c\": container with ID starting with 42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c not found: ID does not exist" containerID="42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.529738 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c"} err="failed to get container status \"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c\": rpc error: code = NotFound desc = could not find container \"42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c\": container with ID starting with 42a06b7055535d82c161289659e0975245d6eec36eff09198b591ec8baf0e82c not found: ID does not exist" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.529757 5136 scope.go:117] "RemoveContainer" containerID="4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.530067 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6\": container with ID starting with 4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6 not found: ID does not exist" containerID="4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.530094 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6"} err="failed to get container status \"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6\": rpc error: code = NotFound desc = could not find container \"4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6\": container with ID starting with 4519c9057a23038db9b83cd0c4880e8ee94bfc46921f3913b6b6b71f7b6348a6 not found: ID does not exist" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.530110 5136 scope.go:117] "RemoveContainer" containerID="c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3" Mar 20 07:13:47 crc kubenswrapper[5136]: E0320 07:13:47.530423 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3\": container with ID starting with c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3 not found: ID does not exist" containerID="c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.530464 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3"} err="failed to get container status \"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3\": rpc error: code = NotFound desc = could not find container 
\"c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3\": container with ID starting with c8921d468496f173a7bb0fcd353a11b75a7b181162136275554a98bf551c6fc3 not found: ID does not exist" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607319 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607442 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59dlm\" (UniqueName: \"kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607514 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607539 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607570 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data\") pod 
\"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607594 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.607664 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.708953 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59dlm\" (UniqueName: \"kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.709334 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.709512 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.709751 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.709930 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.710166 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.710360 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.710390 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.711081 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 
crc kubenswrapper[5136]: I0320 07:13:47.715326 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.716893 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.717295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.719806 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.731035 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59dlm\" (UniqueName: \"kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm\") pod \"ceilometer-0\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " pod="openstack/ceilometer-0" Mar 20 07:13:47 crc kubenswrapper[5136]: I0320 07:13:47.793862 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:13:48 crc kubenswrapper[5136]: W0320 07:13:48.283580 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b351b6a_5365_40cf_9d42_c6d4df7cc48b.slice/crio-27ead29dd635d8c64178622abcb2bb11b72e1cbf565e64de2bb239a0882424b4 WatchSource:0}: Error finding container 27ead29dd635d8c64178622abcb2bb11b72e1cbf565e64de2bb239a0882424b4: Status 404 returned error can't find the container with id 27ead29dd635d8c64178622abcb2bb11b72e1cbf565e64de2bb239a0882424b4 Mar 20 07:13:48 crc kubenswrapper[5136]: I0320 07:13:48.286624 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:13:48 crc kubenswrapper[5136]: I0320 07:13:48.415633 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8f028e7-076a-4fa6-93de-08842dc040f8" path="/var/lib/kubelet/pods/e8f028e7-076a-4fa6-93de-08842dc040f8/volumes" Mar 20 07:13:48 crc kubenswrapper[5136]: I0320 07:13:48.422102 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerStarted","Data":"27ead29dd635d8c64178622abcb2bb11b72e1cbf565e64de2bb239a0882424b4"} Mar 20 07:13:49 crc kubenswrapper[5136]: I0320 07:13:49.431517 5136 generic.go:334] "Generic (PLEG): container finished" podID="2e901a54-c442-45fd-a0d8-1568f850efb4" containerID="a6ef812d133f600bf2d930bb51a86bd9525704f16b2a680bf46ad2719737b44f" exitCode=0 Mar 20 07:13:49 crc kubenswrapper[5136]: I0320 07:13:49.431570 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" event={"ID":"2e901a54-c442-45fd-a0d8-1568f850efb4","Type":"ContainerDied","Data":"a6ef812d133f600bf2d930bb51a86bd9525704f16b2a680bf46ad2719737b44f"} Mar 20 07:13:49 crc kubenswrapper[5136]: I0320 07:13:49.434046 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerStarted","Data":"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b"} Mar 20 07:13:50 crc kubenswrapper[5136]: I0320 07:13:50.920500 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.093389 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data\") pod \"2e901a54-c442-45fd-a0d8-1568f850efb4\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.093527 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts\") pod \"2e901a54-c442-45fd-a0d8-1568f850efb4\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.093566 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle\") pod \"2e901a54-c442-45fd-a0d8-1568f850efb4\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.093608 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4hm6\" (UniqueName: \"kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6\") pod \"2e901a54-c442-45fd-a0d8-1568f850efb4\" (UID: \"2e901a54-c442-45fd-a0d8-1568f850efb4\") " Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.106602 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts" 
(OuterVolumeSpecName: "scripts") pod "2e901a54-c442-45fd-a0d8-1568f850efb4" (UID: "2e901a54-c442-45fd-a0d8-1568f850efb4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.106811 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6" (OuterVolumeSpecName: "kube-api-access-w4hm6") pod "2e901a54-c442-45fd-a0d8-1568f850efb4" (UID: "2e901a54-c442-45fd-a0d8-1568f850efb4"). InnerVolumeSpecName "kube-api-access-w4hm6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.121630 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data" (OuterVolumeSpecName: "config-data") pod "2e901a54-c442-45fd-a0d8-1568f850efb4" (UID: "2e901a54-c442-45fd-a0d8-1568f850efb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.121651 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e901a54-c442-45fd-a0d8-1568f850efb4" (UID: "2e901a54-c442-45fd-a0d8-1568f850efb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.195549 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.195581 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.195611 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4hm6\" (UniqueName: \"kubernetes.io/projected/2e901a54-c442-45fd-a0d8-1568f850efb4-kube-api-access-w4hm6\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.195621 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e901a54-c442-45fd-a0d8-1568f850efb4-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.449728 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-rzgpn" event={"ID":"2e901a54-c442-45fd-a0d8-1568f850efb4","Type":"ContainerDied","Data":"811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652"}
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.449765 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="811d4370e886741548fd5e60fd7b20836471ef1cbd1b71ea5b6fdfdd4634e652"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.449797 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-rzgpn"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.653772 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 20 07:13:51 crc kubenswrapper[5136]: E0320 07:13:51.654191 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e901a54-c442-45fd-a0d8-1568f850efb4" containerName="nova-cell0-conductor-db-sync"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.654213 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e901a54-c442-45fd-a0d8-1568f850efb4" containerName="nova-cell0-conductor-db-sync"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.654404 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e901a54-c442-45fd-a0d8-1568f850efb4" containerName="nova-cell0-conductor-db-sync"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.654987 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.657128 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-w7tkw"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.658107 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.661440 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.803564 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.803623 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.803673 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j75vn\" (UniqueName: \"kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.905565 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j75vn\" (UniqueName: \"kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.905700 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.905736 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.909857 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.910451 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.921010 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j75vn\" (UniqueName: \"kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn\") pod \"nova-cell0-conductor-0\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:51 crc kubenswrapper[5136]: I0320 07:13:51.989410 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:52 crc kubenswrapper[5136]: I0320 07:13:52.406931 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 20 07:13:52 crc kubenswrapper[5136]: W0320 07:13:52.412934 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod38885968_65f8_45e9_8e72_7464d5e78b85.slice/crio-1952d35f8dd4e8ea612b2d2c603f4623b8d407450e957b72f0fa46a725392225 WatchSource:0}: Error finding container 1952d35f8dd4e8ea612b2d2c603f4623b8d407450e957b72f0fa46a725392225: Status 404 returned error can't find the container with id 1952d35f8dd4e8ea612b2d2c603f4623b8d407450e957b72f0fa46a725392225
Mar 20 07:13:52 crc kubenswrapper[5136]: I0320 07:13:52.463624 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerStarted","Data":"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0"}
Mar 20 07:13:52 crc kubenswrapper[5136]: I0320 07:13:52.467557 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38885968-65f8-45e9-8e72-7464d5e78b85","Type":"ContainerStarted","Data":"1952d35f8dd4e8ea612b2d2c603f4623b8d407450e957b72f0fa46a725392225"}
Mar 20 07:13:53 crc kubenswrapper[5136]: I0320 07:13:53.477752 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerStarted","Data":"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff"}
Mar 20 07:13:53 crc kubenswrapper[5136]: I0320 07:13:53.479663 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38885968-65f8-45e9-8e72-7464d5e78b85","Type":"ContainerStarted","Data":"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff"}
Mar 20 07:13:53 crc kubenswrapper[5136]: I0320 07:13:53.479820 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 20 07:13:54 crc kubenswrapper[5136]: I0320 07:13:54.495318 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerStarted","Data":"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614"}
Mar 20 07:13:54 crc kubenswrapper[5136]: I0320 07:13:54.495710 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 20 07:13:54 crc kubenswrapper[5136]: I0320 07:13:54.530073 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=3.530048728 podStartE2EDuration="3.530048728s" podCreationTimestamp="2026-03-20 07:13:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:13:53.494642686 +0000 UTC m=+1465.753953877" watchObservedRunningTime="2026-03-20 07:13:54.530048728 +0000 UTC m=+1466.789359889"
Mar 20 07:13:54 crc kubenswrapper[5136]: I0320 07:13:54.536896 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.577621578 podStartE2EDuration="7.53687763s" podCreationTimestamp="2026-03-20 07:13:47 +0000 UTC" firstStartedPulling="2026-03-20 07:13:48.285671683 +0000 UTC m=+1460.544982834" lastFinishedPulling="2026-03-20 07:13:54.244927715 +0000 UTC m=+1466.504238886" observedRunningTime="2026-03-20 07:13:54.521273855 +0000 UTC m=+1466.780584996" watchObservedRunningTime="2026-03-20 07:13:54.53687763 +0000 UTC m=+1466.796188791"
Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.135893 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566514-w8pvt"]
Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.137747 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566514-w8pvt"
Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.140229 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.140425 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.141637 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.146844 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566514-w8pvt"]
Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.259516 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qkg6\" (UniqueName: \"kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6\") pod \"auto-csr-approver-29566514-w8pvt\" (UID: \"f034b011-ac81-4ef1-aa8b-39164a6c98ee\") " pod="openshift-infra/auto-csr-approver-29566514-w8pvt"
Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.360901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qkg6\" (UniqueName: \"kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6\") pod \"auto-csr-approver-29566514-w8pvt\" (UID: \"f034b011-ac81-4ef1-aa8b-39164a6c98ee\") " pod="openshift-infra/auto-csr-approver-29566514-w8pvt"
Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.380355 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qkg6\" (UniqueName: \"kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6\") pod \"auto-csr-approver-29566514-w8pvt\" (UID: \"f034b011-ac81-4ef1-aa8b-39164a6c98ee\") " pod="openshift-infra/auto-csr-approver-29566514-w8pvt"
Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.454400 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566514-w8pvt"
Mar 20 07:14:00 crc kubenswrapper[5136]: I0320 07:14:00.913410 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566514-w8pvt"]
Mar 20 07:14:01 crc kubenswrapper[5136]: I0320 07:14:01.593961 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" event={"ID":"f034b011-ac81-4ef1-aa8b-39164a6c98ee","Type":"ContainerStarted","Data":"39b50e4e3124f6d2ab50323d1ca1ded11a637bf1992d62e4284a2707b5108ecc"}
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.036442 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.496272 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-c5brf"]
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.497694 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.500207 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.501694 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.513427 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5brf"]
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.604719 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.604795 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.604887 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.604922 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c4z8\" (UniqueName: \"kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.617800 5136 generic.go:334] "Generic (PLEG): container finished" podID="f034b011-ac81-4ef1-aa8b-39164a6c98ee" containerID="d1609ae90ac31423489405692434f7f762e8aa11262621b19e053461b1226222" exitCode=0
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.617877 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" event={"ID":"f034b011-ac81-4ef1-aa8b-39164a6c98ee","Type":"ContainerDied","Data":"d1609ae90ac31423489405692434f7f762e8aa11262621b19e053461b1226222"}
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.687978 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.689573 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.694472 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.709340 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.709468 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c4z8\" (UniqueName: \"kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.709561 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.709585 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.709686 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.718649 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.722608 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.740741 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.747577 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c4z8\" (UniqueName: \"kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8\") pod \"nova-cell0-cell-mapping-c5brf\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.804903 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.806397 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.809652 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.810884 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.810913 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.811016 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpwth\" (UniqueName: \"kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.825255 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5brf"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.859764 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.920772 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqkzt\" (UniqueName: \"kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.920878 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpwth\" (UniqueName: \"kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.920928 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.920974 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.921005 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.921031 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.921048 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.921281 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.925583 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.935396 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.939250 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.939766 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.946852 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 20 07:14:02 crc kubenswrapper[5136]: I0320 07:14:02.987040 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpwth\" (UniqueName: \"kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth\") pod \"nova-scheduler-0\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.014212 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.025880 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.025955 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.025983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026001 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6p42\" (UniqueName: \"kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026033 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026040 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026068 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026120 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.026141 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqkzt\" (UniqueName: \"kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.027284 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.028020 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.032260 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.037460 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.051533 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.060878 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"]
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.062417 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.070273 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqkzt\" (UniqueName: \"kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt\") pod \"nova-metadata-0\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.133838 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c68nt\" (UniqueName: \"kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.133927 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.133946 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6p42\" (UniqueName: \"kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.133985 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.134038 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.134086 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.134112 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.134592 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.153488 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.154202 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.167876 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.174871 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.186414 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6p42\" (UniqueName: \"kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42\") pod \"nova-api-0\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") " pod="openstack/nova-api-0"
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.189164 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"]
Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.199593 5136 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.235907 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.235964 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.235984 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236008 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c68nt\" (UniqueName: \"kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236041 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " 
pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236069 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236105 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zhvw\" (UniqueName: \"kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236126 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.236145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.292020 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.339947 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.340027 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.340090 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zhvw\" (UniqueName: \"kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.340118 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.340140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " 
pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.340254 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.341308 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.344625 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c68nt\" (UniqueName: \"kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.345111 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.345301 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.345901 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.357132 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.359112 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.366327 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zhvw\" (UniqueName: \"kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw\") pod \"dnsmasq-dns-7b495b9cc7-zflc2\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.367328 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.533411 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5brf"] Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.627540 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.637282 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5brf" event={"ID":"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab","Type":"ContainerStarted","Data":"189025ea97cd8693552bd7a1200886f635ac7ec738c1a8d12f444054eeb7b139"} Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.869466 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:14:03 crc kubenswrapper[5136]: I0320 07:14:03.879617 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.014982 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhwjx"] Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.016336 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.019042 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.019275 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.042989 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhwjx"] Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.093324 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"] Mar 20 07:14:04 crc kubenswrapper[5136]: W0320 07:14:04.098936 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod470e7cfd_fbbb_467e_8115_05cb5654655c.slice/crio-0815fbb3bd31db22d56e0ee37bf687b5d4a3901815c3e15d641f9bfe006afa83 WatchSource:0}: Error finding container 0815fbb3bd31db22d56e0ee37bf687b5d4a3901815c3e15d641f9bfe006afa83: Status 404 returned error can't find the container with id 0815fbb3bd31db22d56e0ee37bf687b5d4a3901815c3e15d641f9bfe006afa83 Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.106530 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.162085 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.162151 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.162280 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.162324 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz6q8\" (UniqueName: 
\"kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.261974 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.263529 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.263603 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.263680 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.263706 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nz6q8\" (UniqueName: \"kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " 
pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.271725 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.273471 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.276405 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.302166 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nz6q8\" (UniqueName: \"kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8\") pod \"nova-cell1-conductor-db-sync-lhwjx\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.332083 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:04 crc kubenswrapper[5136]: W0320 07:14:04.343937 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8e70074_47b9_45a2_8dce_52b29305cdf4.slice/crio-01468c63a42bfe893531410c532c091f281c2446cd45d74c41e17c5a31b66895 WatchSource:0}: Error finding container 01468c63a42bfe893531410c532c091f281c2446cd45d74c41e17c5a31b66895: Status 404 returned error can't find the container with id 01468c63a42bfe893531410c532c091f281c2446cd45d74c41e17c5a31b66895 Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.353307 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.364531 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qkg6\" (UniqueName: \"kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6\") pod \"f034b011-ac81-4ef1-aa8b-39164a6c98ee\" (UID: \"f034b011-ac81-4ef1-aa8b-39164a6c98ee\") " Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.371430 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6" (OuterVolumeSpecName: "kube-api-access-6qkg6") pod "f034b011-ac81-4ef1-aa8b-39164a6c98ee" (UID: "f034b011-ac81-4ef1-aa8b-39164a6c98ee"). InnerVolumeSpecName "kube-api-access-6qkg6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.467537 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qkg6\" (UniqueName: \"kubernetes.io/projected/f034b011-ac81-4ef1-aa8b-39164a6c98ee-kube-api-access-6qkg6\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.662230 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5brf" event={"ID":"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab","Type":"ContainerStarted","Data":"d40caae4293d07d1be57a3b0fe6a0c2358da7d3ca34831236dc80ae177c4c105"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.672233 5136 generic.go:334] "Generic (PLEG): container finished" podID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerID="296caa72bdf067401801dcafde6b349a8fa9a120a15acef2e0b624bdeebcf37a" exitCode=0 Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.672282 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" event={"ID":"470e7cfd-fbbb-467e-8115-05cb5654655c","Type":"ContainerDied","Data":"296caa72bdf067401801dcafde6b349a8fa9a120a15acef2e0b624bdeebcf37a"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.672319 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" event={"ID":"470e7cfd-fbbb-467e-8115-05cb5654655c","Type":"ContainerStarted","Data":"0815fbb3bd31db22d56e0ee37bf687b5d4a3901815c3e15d641f9bfe006afa83"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.683746 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-c5brf" podStartSLOduration=2.6837302320000003 podStartE2EDuration="2.683730232s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 
07:14:04.676720973 +0000 UTC m=+1476.936032124" watchObservedRunningTime="2026-03-20 07:14:04.683730232 +0000 UTC m=+1476.943041373" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.685431 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.685475 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566514-w8pvt" event={"ID":"f034b011-ac81-4ef1-aa8b-39164a6c98ee","Type":"ContainerDied","Data":"39b50e4e3124f6d2ab50323d1ca1ded11a637bf1992d62e4284a2707b5108ecc"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.685500 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39b50e4e3124f6d2ab50323d1ca1ded11a637bf1992d62e4284a2707b5108ecc" Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.693700 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerStarted","Data":"20bf0d47cfdd7ced2b7aa5c95e4f2ffab3ec012eb224a73fbcf3f8530975a2a3"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.703251 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8","Type":"ContainerStarted","Data":"0df1814e7262e95838c178b1e5b10663c640c5fde6adccd96881a36676604504"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.713050 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8e70074-47b9-45a2-8dce-52b29305cdf4","Type":"ContainerStarted","Data":"01468c63a42bfe893531410c532c091f281c2446cd45d74c41e17c5a31b66895"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.728165 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerStarted","Data":"a5ca31eef173d869233bc7c356cbb5c4a26b6f89cb34f063ea18470c4f07ae64"} Mar 20 07:14:04 crc kubenswrapper[5136]: I0320 07:14:04.869166 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhwjx"] Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.356645 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566508-v874c"] Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.377473 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566508-v874c"] Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.742031 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" event={"ID":"470e7cfd-fbbb-467e-8115-05cb5654655c","Type":"ContainerStarted","Data":"a7d15dbcf3e44927ae943561f1932c75895f70aa4d9c499b6616ad45bc104764"} Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.742124 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.745455 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" event={"ID":"3e4cd633-e391-4daa-8d31-f9e05afb5fe9","Type":"ContainerStarted","Data":"0b0d8b9ccdc0ce4c2fbd89f7f74e2b08044fdede201e9fe4c0352ac82e9375a6"} Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.745496 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" event={"ID":"3e4cd633-e391-4daa-8d31-f9e05afb5fe9","Type":"ContainerStarted","Data":"be299a09fe4b2eaa315d545ec6ad6716116fac41b9ad64f156a3aad3be4b468f"} Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.765499 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" 
podStartSLOduration=3.765477875 podStartE2EDuration="3.765477875s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:05.756396712 +0000 UTC m=+1478.015707863" watchObservedRunningTime="2026-03-20 07:14:05.765477875 +0000 UTC m=+1478.024789036" Mar 20 07:14:05 crc kubenswrapper[5136]: I0320 07:14:05.782636 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" podStartSLOduration=2.782617419 podStartE2EDuration="2.782617419s" podCreationTimestamp="2026-03-20 07:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:05.773565977 +0000 UTC m=+1478.032877128" watchObservedRunningTime="2026-03-20 07:14:05.782617419 +0000 UTC m=+1478.041928570" Mar 20 07:14:06 crc kubenswrapper[5136]: I0320 07:14:06.411946 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9cf4346-e624-476e-b04c-43b35e0a83cd" path="/var/lib/kubelet/pods/f9cf4346-e624-476e-b04c-43b35e0a83cd/volumes" Mar 20 07:14:06 crc kubenswrapper[5136]: I0320 07:14:06.803622 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:06 crc kubenswrapper[5136]: I0320 07:14:06.838330 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.771664 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerStarted","Data":"5b986819d9a90e1b85c9700743fca946f3bc072f13b32ee80b0ccb0986a0c382"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.772099 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerStarted","Data":"57f37d9f2fcaf7ecb1593abeb0dac4f77898bf18e2e7d2992aaecaca2cb60ac9"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.774323 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerStarted","Data":"f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.774356 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerStarted","Data":"27c023c35669b2ba848d9e65c7d0898f10c49020818f9d3f19dccfc3afaa8e46"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.774452 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-log" containerID="cri-o://27c023c35669b2ba848d9e65c7d0898f10c49020818f9d3f19dccfc3afaa8e46" gracePeriod=30 Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.774661 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-metadata" containerID="cri-o://f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02" gracePeriod=30 Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.777439 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8e70074-47b9-45a2-8dce-52b29305cdf4","Type":"ContainerStarted","Data":"043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.777507 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f8e70074-47b9-45a2-8dce-52b29305cdf4" 
containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926" gracePeriod=30 Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.783045 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8","Type":"ContainerStarted","Data":"5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b"} Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.873123 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.86717633 podStartE2EDuration="5.873105455s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="2026-03-20 07:14:04.1033633 +0000 UTC m=+1476.362674451" lastFinishedPulling="2026-03-20 07:14:07.109292425 +0000 UTC m=+1479.368603576" observedRunningTime="2026-03-20 07:14:07.819079234 +0000 UTC m=+1480.078390405" watchObservedRunningTime="2026-03-20 07:14:07.873105455 +0000 UTC m=+1480.132416606" Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.877157 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.69075244 podStartE2EDuration="5.877146271s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="2026-03-20 07:14:03.890523797 +0000 UTC m=+1476.149834948" lastFinishedPulling="2026-03-20 07:14:07.076917628 +0000 UTC m=+1479.336228779" observedRunningTime="2026-03-20 07:14:07.871916928 +0000 UTC m=+1480.131228079" watchObservedRunningTime="2026-03-20 07:14:07.877146271 +0000 UTC m=+1480.136457422" Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.904633 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.17566408 podStartE2EDuration="5.904610925s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="2026-03-20 
07:14:04.347905061 +0000 UTC m=+1476.607216212" lastFinishedPulling="2026-03-20 07:14:07.076851906 +0000 UTC m=+1479.336163057" observedRunningTime="2026-03-20 07:14:07.904325406 +0000 UTC m=+1480.163636557" watchObservedRunningTime="2026-03-20 07:14:07.904610925 +0000 UTC m=+1480.163922076" Mar 20 07:14:07 crc kubenswrapper[5136]: I0320 07:14:07.958773 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.773739333 podStartE2EDuration="5.95873285s" podCreationTimestamp="2026-03-20 07:14:02 +0000 UTC" firstStartedPulling="2026-03-20 07:14:03.890238328 +0000 UTC m=+1476.149549479" lastFinishedPulling="2026-03-20 07:14:07.075231845 +0000 UTC m=+1479.334542996" observedRunningTime="2026-03-20 07:14:07.939285065 +0000 UTC m=+1480.198596216" watchObservedRunningTime="2026-03-20 07:14:07.95873285 +0000 UTC m=+1480.218044011" Mar 20 07:14:08 crc kubenswrapper[5136]: I0320 07:14:08.015926 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 07:14:08 crc kubenswrapper[5136]: I0320 07:14:08.632188 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:08 crc kubenswrapper[5136]: I0320 07:14:08.795099 5136 generic.go:334] "Generic (PLEG): container finished" podID="0898ed98-4947-4790-9e86-f022b20bc330" containerID="27c023c35669b2ba848d9e65c7d0898f10c49020818f9d3f19dccfc3afaa8e46" exitCode=143 Mar 20 07:14:08 crc kubenswrapper[5136]: I0320 07:14:08.795171 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerDied","Data":"27c023c35669b2ba848d9e65c7d0898f10c49020818f9d3f19dccfc3afaa8e46"} Mar 20 07:14:11 crc kubenswrapper[5136]: I0320 07:14:11.840587 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" 
containerID="d40caae4293d07d1be57a3b0fe6a0c2358da7d3ca34831236dc80ae177c4c105" exitCode=0 Mar 20 07:14:11 crc kubenswrapper[5136]: I0320 07:14:11.841194 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5brf" event={"ID":"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab","Type":"ContainerDied","Data":"d40caae4293d07d1be57a3b0fe6a0c2358da7d3ca34831236dc80ae177c4c105"} Mar 20 07:14:12 crc kubenswrapper[5136]: I0320 07:14:12.851009 5136 generic.go:334] "Generic (PLEG): container finished" podID="3e4cd633-e391-4daa-8d31-f9e05afb5fe9" containerID="0b0d8b9ccdc0ce4c2fbd89f7f74e2b08044fdede201e9fe4c0352ac82e9375a6" exitCode=0 Mar 20 07:14:12 crc kubenswrapper[5136]: I0320 07:14:12.851123 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" event={"ID":"3e4cd633-e391-4daa-8d31-f9e05afb5fe9","Type":"ContainerDied","Data":"0b0d8b9ccdc0ce4c2fbd89f7f74e2b08044fdede201e9fe4c0352ac82e9375a6"} Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.015669 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.060326 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.200370 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.200441 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.210899 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.259608 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts\") pod \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.259733 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle\") pod \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.259872 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data\") pod \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.259920 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c4z8\" (UniqueName: \"kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8\") pod \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\" (UID: \"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.267245 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts" (OuterVolumeSpecName: "scripts") pod "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" (UID: "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.271511 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8" (OuterVolumeSpecName: "kube-api-access-2c4z8") pod "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" (UID: "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab"). InnerVolumeSpecName "kube-api-access-2c4z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.292086 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data" (OuterVolumeSpecName: "config-data") pod "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" (UID: "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.292340 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" (UID: "bd689ec0-53e3-498c-9bd7-e6c4be0a94ab"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.361973 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.362015 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.362030 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.362042 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c4z8\" (UniqueName: \"kubernetes.io/projected/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab-kube-api-access-2c4z8\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.370961 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.440083 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"] Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.440337 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="dnsmasq-dns" containerID="cri-o://01aa356b57e965220f79e7a24da86937ea014054be6bb673baf18c8bb2471582" gracePeriod=10 Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.859578 5136 generic.go:334] "Generic (PLEG): container finished" podID="200895ec-fcf9-436d-82d3-c26c198e1485" 
containerID="01aa356b57e965220f79e7a24da86937ea014054be6bb673baf18c8bb2471582" exitCode=0 Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.859627 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" event={"ID":"200895ec-fcf9-436d-82d3-c26c198e1485","Type":"ContainerDied","Data":"01aa356b57e965220f79e7a24da86937ea014054be6bb673baf18c8bb2471582"} Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.859650 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" event={"ID":"200895ec-fcf9-436d-82d3-c26c198e1485","Type":"ContainerDied","Data":"1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d"} Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.859661 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a1eab5f1e60735841a0d3aa3364f7f1355c8183de8ac566e94a25a08426cc8d" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.861787 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-c5brf" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.861932 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-c5brf" event={"ID":"bd689ec0-53e3-498c-9bd7-e6c4be0a94ab","Type":"ContainerDied","Data":"189025ea97cd8693552bd7a1200886f635ac7ec738c1a8d12f444054eeb7b139"} Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.861959 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189025ea97cd8693552bd7a1200886f635ac7ec738c1a8d12f444054eeb7b139" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.871007 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.938299 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972192 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972238 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972353 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972380 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972465 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " Mar 20 
07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.972512 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq6zh\" (UniqueName: \"kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh\") pod \"200895ec-fcf9-436d-82d3-c26c198e1485\" (UID: \"200895ec-fcf9-436d-82d3-c26c198e1485\") " Mar 20 07:14:13 crc kubenswrapper[5136]: I0320 07:14:13.981983 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh" (OuterVolumeSpecName: "kube-api-access-cq6zh") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "kube-api-access-cq6zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.033132 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.053844 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.073903 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.074402 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq6zh\" (UniqueName: \"kubernetes.io/projected/200895ec-fcf9-436d-82d3-c26c198e1485-kube-api-access-cq6zh\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.074428 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.074437 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.074448 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.095416 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.095889 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-log" 
containerID="cri-o://57f37d9f2fcaf7ecb1593abeb0dac4f77898bf18e2e7d2992aaecaca2cb60ac9" gracePeriod=30 Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.097217 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-api" containerID="cri-o://5b986819d9a90e1b85c9700743fca946f3bc072f13b32ee80b0ccb0986a0c382" gracePeriod=30 Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.099592 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.115033 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.115183 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.191:8774/\": EOF" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.136210 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config" (OuterVolumeSpecName: "config") pod "200895ec-fcf9-436d-82d3-c26c198e1485" (UID: "200895ec-fcf9-436d-82d3-c26c198e1485"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.177841 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.177873 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/200895ec-fcf9-436d-82d3-c26c198e1485-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.321083 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.380233 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data\") pod \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.380347 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts\") pod \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.380370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nz6q8\" (UniqueName: \"kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8\") pod \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.380432 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle\") pod \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\" (UID: \"3e4cd633-e391-4daa-8d31-f9e05afb5fe9\") " Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.383802 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.385195 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8" (OuterVolumeSpecName: "kube-api-access-nz6q8") pod "3e4cd633-e391-4daa-8d31-f9e05afb5fe9" (UID: "3e4cd633-e391-4daa-8d31-f9e05afb5fe9"). InnerVolumeSpecName "kube-api-access-nz6q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.413958 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts" (OuterVolumeSpecName: "scripts") pod "3e4cd633-e391-4daa-8d31-f9e05afb5fe9" (UID: "3e4cd633-e391-4daa-8d31-f9e05afb5fe9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.464292 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data" (OuterVolumeSpecName: "config-data") pod "3e4cd633-e391-4daa-8d31-f9e05afb5fe9" (UID: "3e4cd633-e391-4daa-8d31-f9e05afb5fe9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.470071 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e4cd633-e391-4daa-8d31-f9e05afb5fe9" (UID: "3e4cd633-e391-4daa-8d31-f9e05afb5fe9"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.482160 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.482187 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.482198 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nz6q8\" (UniqueName: \"kubernetes.io/projected/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-kube-api-access-nz6q8\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.482207 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e4cd633-e391-4daa-8d31-f9e05afb5fe9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.870262 5136 generic.go:334] "Generic (PLEG): container finished" podID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerID="57f37d9f2fcaf7ecb1593abeb0dac4f77898bf18e2e7d2992aaecaca2cb60ac9" exitCode=143 Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.870335 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerDied","Data":"57f37d9f2fcaf7ecb1593abeb0dac4f77898bf18e2e7d2992aaecaca2cb60ac9"} Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.872140 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8995fbb57-rp89c" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.872130 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" event={"ID":"3e4cd633-e391-4daa-8d31-f9e05afb5fe9","Type":"ContainerDied","Data":"be299a09fe4b2eaa315d545ec6ad6716116fac41b9ad64f156a3aad3be4b468f"} Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.872194 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be299a09fe4b2eaa315d545ec6ad6716116fac41b9ad64f156a3aad3be4b468f" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.872311 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-lhwjx" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.912810 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"] Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.920958 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8995fbb57-rp89c"] Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.967895 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 07:14:14 crc kubenswrapper[5136]: E0320 07:14:14.968259 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" containerName="nova-manage" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968275 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" containerName="nova-manage" Mar 20 07:14:14 crc kubenswrapper[5136]: E0320 07:14:14.968288 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e4cd633-e391-4daa-8d31-f9e05afb5fe9" containerName="nova-cell1-conductor-db-sync" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968294 5136 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="3e4cd633-e391-4daa-8d31-f9e05afb5fe9" containerName="nova-cell1-conductor-db-sync" Mar 20 07:14:14 crc kubenswrapper[5136]: E0320 07:14:14.968314 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="init" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968321 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="init" Mar 20 07:14:14 crc kubenswrapper[5136]: E0320 07:14:14.968337 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="dnsmasq-dns" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968342 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="dnsmasq-dns" Mar 20 07:14:14 crc kubenswrapper[5136]: E0320 07:14:14.968351 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f034b011-ac81-4ef1-aa8b-39164a6c98ee" containerName="oc" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968357 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f034b011-ac81-4ef1-aa8b-39164a6c98ee" containerName="oc" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968516 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f034b011-ac81-4ef1-aa8b-39164a6c98ee" containerName="oc" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968533 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" containerName="dnsmasq-dns" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968543 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" containerName="nova-manage" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.968552 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e4cd633-e391-4daa-8d31-f9e05afb5fe9" 
containerName="nova-cell1-conductor-db-sync" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.969129 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.977079 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 20 07:14:14 crc kubenswrapper[5136]: I0320 07:14:14.977320 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.096765 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0" Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.096841 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0" Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.096936 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzjvz\" (UniqueName: \"kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0" Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.197947 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0" Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.198011 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0" Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.198107 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzjvz\" (UniqueName: \"kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0" Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.202330 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0" Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.202534 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0" Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.221916 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzjvz\" (UniqueName: \"kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz\") pod \"nova-cell1-conductor-0\" 
(UID: \"52463352-7504-47a4-92e5-d672bab85574\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.283426 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.750115 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.884753 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"52463352-7504-47a4-92e5-d672bab85574","Type":"ContainerStarted","Data":"d25cabad936d4a8da77263639f37547fcf3ffbbafde65e2d7285a8e382e5513c"}
Mar 20 07:14:15 crc kubenswrapper[5136]: I0320 07:14:15.884921 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerName="nova-scheduler-scheduler" containerID="cri-o://5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b" gracePeriod=30
Mar 20 07:14:16 crc kubenswrapper[5136]: I0320 07:14:16.407854 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200895ec-fcf9-436d-82d3-c26c198e1485" path="/var/lib/kubelet/pods/200895ec-fcf9-436d-82d3-c26c198e1485/volumes"
Mar 20 07:14:16 crc kubenswrapper[5136]: I0320 07:14:16.894969 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"52463352-7504-47a4-92e5-d672bab85574","Type":"ContainerStarted","Data":"f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9"}
Mar 20 07:14:16 crc kubenswrapper[5136]: I0320 07:14:16.895152 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Mar 20 07:14:16 crc kubenswrapper[5136]: I0320 07:14:16.917919 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.91789646 podStartE2EDuration="2.91789646s" podCreationTimestamp="2026-03-20 07:14:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:16.916238958 +0000 UTC m=+1489.175550119" watchObservedRunningTime="2026-03-20 07:14:16.91789646 +0000 UTC m=+1489.177207631"
Mar 20 07:14:17 crc kubenswrapper[5136]: I0320 07:14:17.802220 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 20 07:14:18 crc kubenswrapper[5136]: E0320 07:14:18.016507 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 07:14:18 crc kubenswrapper[5136]: E0320 07:14:18.017779 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 07:14:18 crc kubenswrapper[5136]: E0320 07:14:18.018910 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 07:14:18 crc kubenswrapper[5136]: E0320 07:14:18.018968 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerName="nova-scheduler-scheduler"
Mar 20 07:14:18 crc kubenswrapper[5136]: E0320 07:14:18.875626 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64c2e510_4c83_41bf_ae4a_e8cc1dc058f8.slice/crio-5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b.scope\": RecentStats: unable to find data in memory cache]"
Mar 20 07:14:18 crc kubenswrapper[5136]: I0320 07:14:18.915383 5136 generic.go:334] "Generic (PLEG): container finished" podID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerID="5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b" exitCode=0
Mar 20 07:14:18 crc kubenswrapper[5136]: I0320 07:14:18.915411 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8","Type":"ContainerDied","Data":"5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b"}
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.192738 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.371246 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle\") pod \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") "
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.371360 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data\") pod \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") "
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.371401 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpwth\" (UniqueName: \"kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth\") pod \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\" (UID: \"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8\") "
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.389142 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth" (OuterVolumeSpecName: "kube-api-access-xpwth") pod "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" (UID: "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8"). InnerVolumeSpecName "kube-api-access-xpwth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.402961 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" (UID: "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.409018 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data" (OuterVolumeSpecName: "config-data") pod "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" (UID: "64c2e510-4c83-41bf-ae4a-e8cc1dc058f8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.474152 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.474204 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.474226 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpwth\" (UniqueName: \"kubernetes.io/projected/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8-kube-api-access-xpwth\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.926107 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"64c2e510-4c83-41bf-ae4a-e8cc1dc058f8","Type":"ContainerDied","Data":"0df1814e7262e95838c178b1e5b10663c640c5fde6adccd96881a36676604504"}
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.926361 5136 scope.go:117] "RemoveContainer" containerID="5cee24c274fa82fbc9a248515f82aa7aaf0c85708b3024d4d2662c475b5bc62b"
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.926159 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.932034 5136 generic.go:334] "Generic (PLEG): container finished" podID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerID="5b986819d9a90e1b85c9700743fca946f3bc072f13b32ee80b0ccb0986a0c382" exitCode=0
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.932069 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerDied","Data":"5b986819d9a90e1b85c9700743fca946f3bc072f13b32ee80b0ccb0986a0c382"}
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.932089 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4e08e031-e23d-44c4-bbb2-039769dc1e24","Type":"ContainerDied","Data":"a5ca31eef173d869233bc7c356cbb5c4a26b6f89cb34f063ea18470c4f07ae64"}
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.932099 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5ca31eef173d869233bc7c356cbb5c4a26b6f89cb34f063ea18470c4f07ae64"
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.933291 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 07:14:19 crc kubenswrapper[5136]: I0320 07:14:19.982359 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.002112 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.019349 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 07:14:20 crc kubenswrapper[5136]: E0320 07:14:20.019883 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-api"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.019905 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-api"
Mar 20 07:14:20 crc kubenswrapper[5136]: E0320 07:14:20.019921 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerName="nova-scheduler-scheduler"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.019929 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerName="nova-scheduler-scheduler"
Mar 20 07:14:20 crc kubenswrapper[5136]: E0320 07:14:20.019967 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-log"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.019978 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-log"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.020186 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-log"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.020214 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" containerName="nova-api-api"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.020243 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" containerName="nova-scheduler-scheduler"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.021368 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.023672 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.039460 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.085387 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs\") pod \"4e08e031-e23d-44c4-bbb2-039769dc1e24\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") "
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.085507 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle\") pod \"4e08e031-e23d-44c4-bbb2-039769dc1e24\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") "
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.085539 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6p42\" (UniqueName: \"kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42\") pod \"4e08e031-e23d-44c4-bbb2-039769dc1e24\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") "
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.085595 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data\") pod \"4e08e031-e23d-44c4-bbb2-039769dc1e24\" (UID: \"4e08e031-e23d-44c4-bbb2-039769dc1e24\") "
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.086678 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs" (OuterVolumeSpecName: "logs") pod "4e08e031-e23d-44c4-bbb2-039769dc1e24" (UID: "4e08e031-e23d-44c4-bbb2-039769dc1e24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.102335 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42" (OuterVolumeSpecName: "kube-api-access-k6p42") pod "4e08e031-e23d-44c4-bbb2-039769dc1e24" (UID: "4e08e031-e23d-44c4-bbb2-039769dc1e24"). InnerVolumeSpecName "kube-api-access-k6p42". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.123213 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data" (OuterVolumeSpecName: "config-data") pod "4e08e031-e23d-44c4-bbb2-039769dc1e24" (UID: "4e08e031-e23d-44c4-bbb2-039769dc1e24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.139971 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4e08e031-e23d-44c4-bbb2-039769dc1e24" (UID: "4e08e031-e23d-44c4-bbb2-039769dc1e24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188725 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shtw7\" (UniqueName: \"kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188808 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188900 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188912 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6p42\" (UniqueName: \"kubernetes.io/projected/4e08e031-e23d-44c4-bbb2-039769dc1e24-kube-api-access-k6p42\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188921 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e08e031-e23d-44c4-bbb2-039769dc1e24-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.188929 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4e08e031-e23d-44c4-bbb2-039769dc1e24-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.290064 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.290758 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.290789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shtw7\" (UniqueName: \"kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.294123 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.294575 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.307041 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shtw7\" (UniqueName: \"kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7\") pod \"nova-scheduler-0\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " pod="openstack/nova-scheduler-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.340435 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.406938 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64c2e510-4c83-41bf-ae4a-e8cc1dc058f8" path="/var/lib/kubelet/pods/64c2e510-4c83-41bf-ae4a-e8cc1dc058f8/volumes"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.819355 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.948172 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a52f0c9-0dde-48d7-83a3-bb05b1217295","Type":"ContainerStarted","Data":"746593fd75baa00dce61da5dddc1f1d308565b21d07137d5d833134ea9410d34"}
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.948205 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.968639 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.975650 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.987146 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.988697 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 07:14:20 crc kubenswrapper[5136]: I0320 07:14:20.998160 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.020423 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.102715 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.102833 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh8c2\" (UniqueName: \"kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.102904 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.103012 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.175774 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.175844 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.204498 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.204942 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.205080 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh8c2\" (UniqueName: \"kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.205530 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.206247 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.210041 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.210131 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.226733 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh8c2\" (UniqueName: \"kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2\") pod \"nova-api-0\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.318064 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.560466 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.561087 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="cf624d46-ce35-4e7f-b463-4b0eba006ded" containerName="kube-state-metrics" containerID="cri-o://0a42176f6839fd2b1fa46f8a90c2d73b4c4eaa11385cb9c81bf9e24e01ecf323" gracePeriod=30
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.815165 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.972421 5136 generic.go:334] "Generic (PLEG): container finished" podID="cf624d46-ce35-4e7f-b463-4b0eba006ded" containerID="0a42176f6839fd2b1fa46f8a90c2d73b4c4eaa11385cb9c81bf9e24e01ecf323" exitCode=2
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.972479 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cf624d46-ce35-4e7f-b463-4b0eba006ded","Type":"ContainerDied","Data":"0a42176f6839fd2b1fa46f8a90c2d73b4c4eaa11385cb9c81bf9e24e01ecf323"}
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.972847 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"cf624d46-ce35-4e7f-b463-4b0eba006ded","Type":"ContainerDied","Data":"344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a"}
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.972869 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="344f02ef25b9534aca8bfe5e6329564a1246ae9de2437ea4000edf51f797f27a"
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.976645 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a52f0c9-0dde-48d7-83a3-bb05b1217295","Type":"ContainerStarted","Data":"2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528"}
Mar 20 07:14:21 crc kubenswrapper[5136]: I0320 07:14:21.991536 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerStarted","Data":"3ee4cb0a8e431ba536bfd981a33fd323677b0e4a660774422b45fc0cf650bc2a"}
Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.012248 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.012228916 podStartE2EDuration="3.012228916s" podCreationTimestamp="2026-03-20 07:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:22.000406047 +0000 UTC m=+1494.259717218" watchObservedRunningTime="2026-03-20 07:14:22.012228916 +0000 UTC m=+1494.271540067"
Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.020809 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.120873 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-478m9\" (UniqueName: \"kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9\") pod \"cf624d46-ce35-4e7f-b463-4b0eba006ded\" (UID: \"cf624d46-ce35-4e7f-b463-4b0eba006ded\") "
Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.134939 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9" (OuterVolumeSpecName: "kube-api-access-478m9") pod "cf624d46-ce35-4e7f-b463-4b0eba006ded" (UID: "cf624d46-ce35-4e7f-b463-4b0eba006ded"). InnerVolumeSpecName "kube-api-access-478m9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.225740 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-478m9\" (UniqueName: \"kubernetes.io/projected/cf624d46-ce35-4e7f-b463-4b0eba006ded-kube-api-access-478m9\") on node \"crc\" DevicePath \"\""
Mar 20 07:14:22 crc kubenswrapper[5136]: I0320 07:14:22.407104 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e08e031-e23d-44c4-bbb2-039769dc1e24" path="/var/lib/kubelet/pods/4e08e031-e23d-44c4-bbb2-039769dc1e24/volumes"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.002128 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerStarted","Data":"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b"}
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.002187 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerStarted","Data":"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077"}
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.002287 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.027046 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.027030786 podStartE2EDuration="3.027030786s" podCreationTimestamp="2026-03-20 07:14:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:23.017637034 +0000 UTC m=+1495.276948205" watchObservedRunningTime="2026-03-20 07:14:23.027030786 +0000 UTC m=+1495.286341937"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.040908 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.056875 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.065540 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 07:14:23 crc kubenswrapper[5136]: E0320 07:14:23.066004 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf624d46-ce35-4e7f-b463-4b0eba006ded" containerName="kube-state-metrics"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.066022 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf624d46-ce35-4e7f-b463-4b0eba006ded" containerName="kube-state-metrics"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.066236 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf624d46-ce35-4e7f-b463-4b0eba006ded" containerName="kube-state-metrics"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.067002 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.069439 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.070924 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.075731 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.240895 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sf4lm\" (UniqueName: \"kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.240970 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.241008 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.241066 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0"
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.246601 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.246915 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-central-agent" containerID="cri-o://5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b" gracePeriod=30
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.247046 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-notification-agent" containerID="cri-o://164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0" gracePeriod=30
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.247120 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="proxy-httpd" containerID="cri-o://b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614" gracePeriod=30
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.247152 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="sg-core" containerID="cri-o://066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff" gracePeriod=30
Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.342938 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName:
\"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.343034 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.343124 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.343208 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sf4lm\" (UniqueName: \"kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.347678 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.348313 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.360533 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.361068 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sf4lm\" (UniqueName: \"kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm\") pod \"kube-state-metrics-0\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") " pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.385956 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 07:14:23 crc kubenswrapper[5136]: I0320 07:14:23.862834 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015452 5136 generic.go:334] "Generic (PLEG): container finished" podID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerID="b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614" exitCode=0 Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015486 5136 generic.go:334] "Generic (PLEG): container finished" podID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerID="066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff" exitCode=2 Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015499 5136 generic.go:334] "Generic (PLEG): container finished" podID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerID="5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b" exitCode=0 Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015522 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerDied","Data":"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614"} Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015566 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerDied","Data":"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff"} Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.015580 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerDied","Data":"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b"} Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.017421 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/kube-state-metrics-0" event={"ID":"c17493c5-d958-46ab-8e02-d190b2fa6944","Type":"ContainerStarted","Data":"634be8a4401daf1087438b6bbc45263ddd43a3a043a4d0fdc9026fb30fdc45cf"} Mar 20 07:14:24 crc kubenswrapper[5136]: I0320 07:14:24.410470 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf624d46-ce35-4e7f-b463-4b0eba006ded" path="/var/lib/kubelet/pods/cf624d46-ce35-4e7f-b463-4b0eba006ded/volumes" Mar 20 07:14:25 crc kubenswrapper[5136]: I0320 07:14:25.333058 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 20 07:14:25 crc kubenswrapper[5136]: I0320 07:14:25.341978 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 07:14:26 crc kubenswrapper[5136]: I0320 07:14:26.039370 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c17493c5-d958-46ab-8e02-d190b2fa6944","Type":"ContainerStarted","Data":"4af2afde3b60e503cf744acf4fb08477b7ec46cb1b30cfb589608690a2df8849"} Mar 20 07:14:26 crc kubenswrapper[5136]: I0320 07:14:26.039918 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 20 07:14:26 crc kubenswrapper[5136]: I0320 07:14:26.058099 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.719506257 podStartE2EDuration="3.058073043s" podCreationTimestamp="2026-03-20 07:14:23 +0000 UTC" firstStartedPulling="2026-03-20 07:14:23.867280945 +0000 UTC m=+1496.126592096" lastFinishedPulling="2026-03-20 07:14:25.205847731 +0000 UTC m=+1497.465158882" observedRunningTime="2026-03-20 07:14:26.05639981 +0000 UTC m=+1498.315710971" watchObservedRunningTime="2026-03-20 07:14:26.058073043 +0000 UTC m=+1498.317384224" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.629108 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754315 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754477 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754495 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754568 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59dlm\" (UniqueName: \"kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754587 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754605 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754657 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle\") pod \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\" (UID: \"0b351b6a-5365-40cf-9d42-c6d4df7cc48b\") " Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.754973 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.755142 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.761031 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm" (OuterVolumeSpecName: "kube-api-access-59dlm") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "kube-api-access-59dlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.775305 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts" (OuterVolumeSpecName: "scripts") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.789045 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.851020 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856357 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856471 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856533 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59dlm\" (UniqueName: \"kubernetes.io/projected/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-kube-api-access-59dlm\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856598 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856660 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.856722 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.857672 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data" (OuterVolumeSpecName: "config-data") pod "0b351b6a-5365-40cf-9d42-c6d4df7cc48b" (UID: "0b351b6a-5365-40cf-9d42-c6d4df7cc48b"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:28 crc kubenswrapper[5136]: I0320 07:14:28.958201 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b351b6a-5365-40cf-9d42-c6d4df7cc48b-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.071473 5136 generic.go:334] "Generic (PLEG): container finished" podID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerID="164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0" exitCode=0 Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.071544 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerDied","Data":"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0"} Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.072124 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"0b351b6a-5365-40cf-9d42-c6d4df7cc48b","Type":"ContainerDied","Data":"27ead29dd635d8c64178622abcb2bb11b72e1cbf565e64de2bb239a0882424b4"} Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.071618 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.072173 5136 scope.go:117] "RemoveContainer" containerID="b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.102775 5136 scope.go:117] "RemoveContainer" containerID="066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.113558 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.131952 5136 scope.go:117] "RemoveContainer" containerID="164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.146310 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.155074 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.155721 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="proxy-httpd" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.155837 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="proxy-httpd" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.155966 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-central-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.156147 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-central-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.156250 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-notification-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.156325 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-notification-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.156415 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="sg-core" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.156510 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="sg-core" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.156902 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="sg-core" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.157046 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-notification-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.157130 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="proxy-httpd" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.157214 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" containerName="ceilometer-central-agent" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.156283 5136 scope.go:117] "RemoveContainer" containerID="5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.159535 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.162210 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.162607 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.162728 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.164250 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.187881 5136 scope.go:117] "RemoveContainer" containerID="b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.188376 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614\": container with ID starting with b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614 not found: ID does not exist" containerID="b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.188418 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614"} err="failed to get container status \"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614\": rpc error: code = NotFound desc = could not find container \"b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614\": container with ID starting with b062329702f586d3a9a766c56f5cd3aa2ccaa12494e354f1edbba28dcc296614 not found: ID does not exist" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 
07:14:29.188446 5136 scope.go:117] "RemoveContainer" containerID="066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.188876 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff\": container with ID starting with 066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff not found: ID does not exist" containerID="066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.188949 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff"} err="failed to get container status \"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff\": rpc error: code = NotFound desc = could not find container \"066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff\": container with ID starting with 066edc85102c6a313bf602662dbadf893aa0185134d0512458d22c082f612cff not found: ID does not exist" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.188977 5136 scope.go:117] "RemoveContainer" containerID="164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.189302 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0\": container with ID starting with 164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0 not found: ID does not exist" containerID="164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.189432 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0"} err="failed to get container status \"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0\": rpc error: code = NotFound desc = could not find container \"164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0\": container with ID starting with 164a94fdc0690960438259e63c68f3f86049df33d0547a7f8b5e7c0c4d905fc0 not found: ID does not exist" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.189637 5136 scope.go:117] "RemoveContainer" containerID="5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b" Mar 20 07:14:29 crc kubenswrapper[5136]: E0320 07:14:29.190039 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b\": container with ID starting with 5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b not found: ID does not exist" containerID="5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.190071 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b"} err="failed to get container status \"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b\": rpc error: code = NotFound desc = could not find container \"5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b\": container with ID starting with 5d01a5abb152de62e0d8931c67065913266b5a9c778d5615edb02e3a5ab4b96b not found: ID does not exist" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.263912 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd\") pod \"ceilometer-0\" (UID: 
\"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264229 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sz9j\" (UniqueName: \"kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264375 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264504 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264606 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264775 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264883 
5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.264983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.367704 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sz9j\" (UniqueName: \"kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.367771 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.367839 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.367890 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.368916 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.368976 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.369027 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.369111 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.369685 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.370401 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.372020 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.374336 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.374527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.375034 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.383463 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " 
pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.387301 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sz9j\" (UniqueName: \"kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j\") pod \"ceilometer-0\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") " pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.480539 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:29 crc kubenswrapper[5136]: I0320 07:14:29.916857 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:30 crc kubenswrapper[5136]: I0320 07:14:30.082671 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerStarted","Data":"c7959dfd3e0a4c0a66b004e981792b62f5688704717664c451678069db344ce1"} Mar 20 07:14:30 crc kubenswrapper[5136]: I0320 07:14:30.130165 5136 scope.go:117] "RemoveContainer" containerID="dd0acbcfa54abd26f2307cdf5e361341926b1d9e084af4898d114e06b8c54d72" Mar 20 07:14:30 crc kubenswrapper[5136]: I0320 07:14:30.341517 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 20 07:14:30 crc kubenswrapper[5136]: I0320 07:14:30.367151 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 20 07:14:30 crc kubenswrapper[5136]: I0320 07:14:30.417649 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b351b6a-5365-40cf-9d42-c6d4df7cc48b" path="/var/lib/kubelet/pods/0b351b6a-5365-40cf-9d42-c6d4df7cc48b/volumes" Mar 20 07:14:31 crc kubenswrapper[5136]: I0320 07:14:31.095403 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerStarted","Data":"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104"} Mar 20 07:14:31 crc kubenswrapper[5136]: I0320 07:14:31.125967 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 20 07:14:31 crc kubenswrapper[5136]: I0320 07:14:31.319472 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:14:31 crc kubenswrapper[5136]: I0320 07:14:31.319515 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:14:32 crc kubenswrapper[5136]: I0320 07:14:32.105996 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerStarted","Data":"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b"} Mar 20 07:14:32 crc kubenswrapper[5136]: I0320 07:14:32.402004 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 07:14:32 crc kubenswrapper[5136]: I0320 07:14:32.402054 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.197:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 07:14:33 crc kubenswrapper[5136]: I0320 07:14:33.116768 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerStarted","Data":"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7"} Mar 20 07:14:33 crc 
kubenswrapper[5136]: I0320 07:14:33.401773 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 20 07:14:35 crc kubenswrapper[5136]: I0320 07:14:35.136398 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerStarted","Data":"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959"} Mar 20 07:14:35 crc kubenswrapper[5136]: I0320 07:14:35.137199 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:14:38 crc kubenswrapper[5136]: E0320 07:14:38.022561 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0898ed98_4947_4790_9e86_f022b20bc330.slice/crio-f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0898ed98_4947_4790_9e86_f022b20bc330.slice/crio-conmon-f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf8e70074_47b9_45a2_8dce_52b29305cdf4.slice/crio-conmon-043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926.scope\": RecentStats: unable to find data in memory cache]" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.183618 5136 generic.go:334] "Generic (PLEG): container finished" podID="0898ed98-4947-4790-9e86-f022b20bc330" containerID="f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02" exitCode=137 Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.183965 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerDied","Data":"f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02"} Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.183990 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0898ed98-4947-4790-9e86-f022b20bc330","Type":"ContainerDied","Data":"20bf0d47cfdd7ced2b7aa5c95e4f2ffab3ec012eb224a73fbcf3f8530975a2a3"} Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.184000 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20bf0d47cfdd7ced2b7aa5c95e4f2ffab3ec012eb224a73fbcf3f8530975a2a3" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.185357 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.376979876 podStartE2EDuration="9.185345854s" podCreationTimestamp="2026-03-20 07:14:29 +0000 UTC" firstStartedPulling="2026-03-20 07:14:29.926213099 +0000 UTC m=+1502.185524240" lastFinishedPulling="2026-03-20 07:14:34.734579057 +0000 UTC m=+1506.993890218" observedRunningTime="2026-03-20 07:14:35.157754595 +0000 UTC m=+1507.417065756" watchObservedRunningTime="2026-03-20 07:14:38.185345854 +0000 UTC m=+1510.444657005" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.185901 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.196871 5136 generic.go:334] "Generic (PLEG): container finished" podID="f8e70074-47b9-45a2-8dce-52b29305cdf4" containerID="043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926" exitCode=137 Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.196907 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"f8e70074-47b9-45a2-8dce-52b29305cdf4","Type":"ContainerDied","Data":"043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926"} Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.198046 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.187713 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.253241 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.338699 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs\") pod \"0898ed98-4947-4790-9e86-f022b20bc330\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.338895 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle\") pod \"0898ed98-4947-4790-9e86-f022b20bc330\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.338967 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqkzt\" (UniqueName: \"kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt\") pod \"0898ed98-4947-4790-9e86-f022b20bc330\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.338996 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data\") 
pod \"0898ed98-4947-4790-9e86-f022b20bc330\" (UID: \"0898ed98-4947-4790-9e86-f022b20bc330\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.339289 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.339314 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs" (OuterVolumeSpecName: "logs") pod "0898ed98-4947-4790-9e86-f022b20bc330" (UID: "0898ed98-4947-4790-9e86-f022b20bc330"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.339537 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.339845 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5pm5\" (UniqueName: \"kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.339920 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0898ed98-4947-4790-9e86-f022b20bc330-logs\") on node \"crc\" DevicePath \"\"" Mar 20 
07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.344773 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt" (OuterVolumeSpecName: "kube-api-access-gqkzt") pod "0898ed98-4947-4790-9e86-f022b20bc330" (UID: "0898ed98-4947-4790-9e86-f022b20bc330"). InnerVolumeSpecName "kube-api-access-gqkzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.349380 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.379784 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data" (OuterVolumeSpecName: "config-data") pod "0898ed98-4947-4790-9e86-f022b20bc330" (UID: "0898ed98-4947-4790-9e86-f022b20bc330"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.399109 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0898ed98-4947-4790-9e86-f022b20bc330" (UID: "0898ed98-4947-4790-9e86-f022b20bc330"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441098 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data\") pod \"f8e70074-47b9-45a2-8dce-52b29305cdf4\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441251 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c68nt\" (UniqueName: \"kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt\") pod \"f8e70074-47b9-45a2-8dce-52b29305cdf4\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441415 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle\") pod \"f8e70074-47b9-45a2-8dce-52b29305cdf4\" (UID: \"f8e70074-47b9-45a2-8dce-52b29305cdf4\") " Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441661 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441787 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5pm5\" (UniqueName: \"kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441863 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441977 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.441998 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqkzt\" (UniqueName: \"kubernetes.io/projected/0898ed98-4947-4790-9e86-f022b20bc330-kube-api-access-gqkzt\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.442012 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0898ed98-4947-4790-9e86-f022b20bc330-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.442485 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.442993 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.457627 5136 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt" (OuterVolumeSpecName: "kube-api-access-c68nt") pod "f8e70074-47b9-45a2-8dce-52b29305cdf4" (UID: "f8e70074-47b9-45a2-8dce-52b29305cdf4"). InnerVolumeSpecName "kube-api-access-c68nt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.460742 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5pm5\" (UniqueName: \"kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5\") pod \"redhat-operators-pfj4j\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.466993 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data" (OuterVolumeSpecName: "config-data") pod "f8e70074-47b9-45a2-8dce-52b29305cdf4" (UID: "f8e70074-47b9-45a2-8dce-52b29305cdf4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.470913 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f8e70074-47b9-45a2-8dce-52b29305cdf4" (UID: "f8e70074-47b9-45a2-8dce-52b29305cdf4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.544194 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c68nt\" (UniqueName: \"kubernetes.io/projected/f8e70074-47b9-45a2-8dce-52b29305cdf4-kube-api-access-c68nt\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.544270 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.544283 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f8e70074-47b9-45a2-8dce-52b29305cdf4-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:38 crc kubenswrapper[5136]: I0320 07:14:38.560549 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.045340 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.208667 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerStarted","Data":"14fe667f3db588129ef772a0e1c0daeddde9622aa1ffb9941f75f06fb9c1984f"} Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.212702 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.212785 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f8e70074-47b9-45a2-8dce-52b29305cdf4","Type":"ContainerDied","Data":"01468c63a42bfe893531410c532c091f281c2446cd45d74c41e17c5a31b66895"} Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.212853 5136 scope.go:117] "RemoveContainer" containerID="043b1ee99aab1c35a728494e20fbe3b332fbc6142919d4cfb70bd4dd6499e926" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.213358 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.245384 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.268545 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.291598 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: E0320 07:14:39.292054 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-metadata" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292071 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-metadata" Mar 20 07:14:39 crc kubenswrapper[5136]: E0320 07:14:39.292090 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-log" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292096 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-log" Mar 20 
07:14:39 crc kubenswrapper[5136]: E0320 07:14:39.292128 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8e70074-47b9-45a2-8dce-52b29305cdf4" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292134 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8e70074-47b9-45a2-8dce-52b29305cdf4" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292290 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-log" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292313 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8e70074-47b9-45a2-8dce-52b29305cdf4" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.292324 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0898ed98-4947-4790-9e86-f022b20bc330" containerName="nova-metadata-metadata" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.293214 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.300969 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.301153 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.308136 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.319106 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.319377 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.319405 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.328073 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.344000 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.345044 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.351801 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.352047 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.352250 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.364106 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.364145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.364177 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.364272 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwgkt\" (UniqueName: \"kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.364508 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.365393 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467084 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2qt\" (UniqueName: \"kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467167 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467193 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467234 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467275 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467354 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwgkt\" (UniqueName: \"kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467437 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467492 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467585 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.467627 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.468273 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.473755 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.474161 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.483052 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwgkt\" (UniqueName: \"kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.488840 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.569209 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld2qt\" (UniqueName: \"kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.569296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.569372 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.569446 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.569498 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.573756 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.576270 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.576405 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.578237 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.593731 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld2qt\" (UniqueName: \"kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt\") pod \"nova-cell1-novncproxy-0\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.643742 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 07:14:39 crc kubenswrapper[5136]: I0320 07:14:39.670139 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.167434 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 07:14:40 crc kubenswrapper[5136]: W0320 07:14:40.169312 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4622969f_2f2e_42d7_81a6_bc6baa386aec.slice/crio-c57256ebbbe3274d2806f8f41571ca74c969c50a2ce02b08771f1ab16c5bdd1d WatchSource:0}: Error finding container c57256ebbbe3274d2806f8f41571ca74c969c50a2ce02b08771f1ab16c5bdd1d: Status 404 returned error can't find the container with id c57256ebbbe3274d2806f8f41571ca74c969c50a2ce02b08771f1ab16c5bdd1d
Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.242341 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerID="05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17" exitCode=0
Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.242966 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerDied","Data":"05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17"}
Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.248277 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerStarted","Data":"c57256ebbbe3274d2806f8f41571ca74c969c50a2ce02b08771f1ab16c5bdd1d"}
Mar 20 07:14:40 crc kubenswrapper[5136]: W0320 07:14:40.250353 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63ab8493_eb78_41d9_b368_bba74dc78166.slice/crio-2da23f701d5f2888a554a42a1e274ecc8ec591c846eec70993fd4a7736e9b816 WatchSource:0}: Error finding container 2da23f701d5f2888a554a42a1e274ecc8ec591c846eec70993fd4a7736e9b816: Status 404 returned error can't find the container with id 2da23f701d5f2888a554a42a1e274ecc8ec591c846eec70993fd4a7736e9b816
Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.251628 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.408123 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0898ed98-4947-4790-9e86-f022b20bc330" path="/var/lib/kubelet/pods/0898ed98-4947-4790-9e86-f022b20bc330/volumes"
Mar 20 07:14:40 crc kubenswrapper[5136]: I0320 07:14:40.409114 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8e70074-47b9-45a2-8dce-52b29305cdf4" path="/var/lib/kubelet/pods/f8e70074-47b9-45a2-8dce-52b29305cdf4/volumes"
Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.262232 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerStarted","Data":"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8"}
Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.262559 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerStarted","Data":"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8"}
Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.264269 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63ab8493-eb78-41d9-b368-bba74dc78166","Type":"ContainerStarted","Data":"a4685f240b41a0ca80c93510a938eb41d236fed91b12082fd48b7f0a68f41629"}
Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.264306 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63ab8493-eb78-41d9-b368-bba74dc78166","Type":"ContainerStarted","Data":"2da23f701d5f2888a554a42a1e274ecc8ec591c846eec70993fd4a7736e9b816"}
Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.290299 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.29027075 podStartE2EDuration="2.29027075s" podCreationTimestamp="2026-03-20 07:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:41.285160961 +0000 UTC m=+1513.544472162" watchObservedRunningTime="2026-03-20 07:14:41.29027075 +0000 UTC m=+1513.549581941"
Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.324058 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.32402538 podStartE2EDuration="2.32402538s" podCreationTimestamp="2026-03-20 07:14:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:41.312944646 +0000 UTC m=+1513.572255797" watchObservedRunningTime="2026-03-20 07:14:41.32402538 +0000 UTC m=+1513.583336561"
Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.327681 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.330741 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Mar 20 07:14:41 crc kubenswrapper[5136]: I0320 07:14:41.334467 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.280110 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerStarted","Data":"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab"}
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.290285 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.537431 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"]
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.539122 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.549491 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"]
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.668808 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.668905 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.668945 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.668983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.669268 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.669401 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bfpk\" (UniqueName: \"kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771268 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bfpk\" (UniqueName: \"kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771347 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771385 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771426 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771487 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.771540 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.772347 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.772424 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.772458 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.772537 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.772587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.790611 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bfpk\" (UniqueName: \"kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk\") pod \"dnsmasq-dns-6bd85b459c-k44rj\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:42 crc kubenswrapper[5136]: I0320 07:14:42.875060 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:43.289544 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerID="3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab" exitCode=0
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:43.289636 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerDied","Data":"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab"}
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:43.499577 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"]
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.302076 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerStarted","Data":"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14"}
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.304010 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerID="cc5a54a6935dd6e523205b586479d84179624ba24df417c663b90589e6d2673f" exitCode=0
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.304058 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" event={"ID":"ccc70cce-242d-4c99-8d3f-ddb541904e29","Type":"ContainerDied","Data":"cc5a54a6935dd6e523205b586479d84179624ba24df417c663b90589e6d2673f"}
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.304103 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" event={"ID":"ccc70cce-242d-4c99-8d3f-ddb541904e29","Type":"ContainerStarted","Data":"b418e83480ddaf25a5b00d4752775ac00973d088cabf1d27a9b6eceb6bb0b062"}
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.341027 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pfj4j" podStartSLOduration=2.882553372 podStartE2EDuration="6.341005459s" podCreationTimestamp="2026-03-20 07:14:38 +0000 UTC" firstStartedPulling="2026-03-20 07:14:40.243516625 +0000 UTC m=+1512.502827776" lastFinishedPulling="2026-03-20 07:14:43.701968712 +0000 UTC m=+1515.961279863" observedRunningTime="2026-03-20 07:14:44.331951877 +0000 UTC m=+1516.591263028" watchObservedRunningTime="2026-03-20 07:14:44.341005459 +0000 UTC m=+1516.600316610"
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.613650 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.614226 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-central-agent" containerID="cri-o://6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" gracePeriod=30
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.614679 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="proxy-httpd" containerID="cri-o://fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959" gracePeriod=30
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.614748 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="sg-core" containerID="cri-o://870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7" gracePeriod=30
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.614802 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-notification-agent" containerID="cri-o://21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" gracePeriod=30
Mar 20 07:14:44 crc kubenswrapper[5136]: I0320 07:14:44.670441 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.162560 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.313973 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" event={"ID":"ccc70cce-242d-4c99-8d3f-ddb541904e29","Type":"ContainerStarted","Data":"ccb4f9c0c6dc989c486c61d0a17af6a9e3438c25ae843380545c453141823051"}
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.315135 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.317059 5136 generic.go:334] "Generic (PLEG): container finished" podID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerID="fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959" exitCode=0
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.317080 5136 generic.go:334] "Generic (PLEG): container finished" podID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerID="870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7" exitCode=2
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.317216 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-log" containerID="cri-o://e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077" gracePeriod=30
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.317297 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-api" containerID="cri-o://bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b" gracePeriod=30
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.317371 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerDied","Data":"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959"}
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.317408 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerDied","Data":"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7"}
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.342712 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" podStartSLOduration=3.342692161 podStartE2EDuration="3.342692161s" podCreationTimestamp="2026-03-20 07:14:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:45.332604238 +0000 UTC m=+1517.591915389" watchObservedRunningTime="2026-03-20 07:14:45.342692161 +0000 UTC m=+1517.602003312"
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.821970 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 07:14:45 crc kubenswrapper[5136]: I0320 07:14:45.822186 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.230964 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327380 5136 generic.go:334] "Generic (PLEG): container finished" podID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerID="21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" exitCode=0
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327408 5136 generic.go:334] "Generic (PLEG): container finished" podID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerID="6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" exitCode=0
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327444 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerDied","Data":"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b"}
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327468 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerDied","Data":"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104"}
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327477 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"96b99c4f-60cb-49ec-a2ba-85c6be21bc19","Type":"ContainerDied","Data":"c7959dfd3e0a4c0a66b004e981792b62f5688704717664c451678069db344ce1"}
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327491 5136 scope.go:117] "RemoveContainer" containerID="fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959"
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.327605 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.333314 5136 generic.go:334] "Generic (PLEG): container finished" podID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerID="e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077" exitCode=143
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.333396 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerDied","Data":"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077"}
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345180 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4sz9j\" (UniqueName: \"kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") "
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345234 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") "
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345314 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") "
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345364 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") "
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345428 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") "
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345461 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") "
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345479 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") "
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.345517 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data\") pod \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\" (UID: \"96b99c4f-60cb-49ec-a2ba-85c6be21bc19\") "
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.347281 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.347389 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.347529 5136 scope.go:117] "RemoveContainer" containerID="870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7"
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.353501 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j" (OuterVolumeSpecName: "kube-api-access-4sz9j") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "kube-api-access-4sz9j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.354167 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts" (OuterVolumeSpecName: "scripts") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "scripts".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.374616 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.403570 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.424123 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447409 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4sz9j\" (UniqueName: \"kubernetes.io/projected/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-kube-api-access-4sz9j\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447438 5136 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447448 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447458 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447467 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447475 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.447483 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.475257 5136 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data" (OuterVolumeSpecName: "config-data") pod "96b99c4f-60cb-49ec-a2ba-85c6be21bc19" (UID: "96b99c4f-60cb-49ec-a2ba-85c6be21bc19"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.497216 5136 scope.go:117] "RemoveContainer" containerID="21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.518782 5136 scope.go:117] "RemoveContainer" containerID="6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.543339 5136 scope.go:117] "RemoveContainer" containerID="fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.543779 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959\": container with ID starting with fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959 not found: ID does not exist" containerID="fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.543827 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959"} err="failed to get container status \"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959\": rpc error: code = NotFound desc = could not find container \"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959\": container with ID starting with fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: 
I0320 07:14:46.543853 5136 scope.go:117] "RemoveContainer" containerID="870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.544201 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7\": container with ID starting with 870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7 not found: ID does not exist" containerID="870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544226 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7"} err="failed to get container status \"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7\": rpc error: code = NotFound desc = could not find container \"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7\": container with ID starting with 870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544242 5136 scope.go:117] "RemoveContainer" containerID="21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.544500 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b\": container with ID starting with 21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b not found: ID does not exist" containerID="21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544523 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b"} err="failed to get container status \"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b\": rpc error: code = NotFound desc = could not find container \"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b\": container with ID starting with 21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544539 5136 scope.go:117] "RemoveContainer" containerID="6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.544860 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104\": container with ID starting with 6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104 not found: ID does not exist" containerID="6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544881 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104"} err="failed to get container status \"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104\": rpc error: code = NotFound desc = could not find container \"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104\": container with ID starting with 6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.544905 5136 scope.go:117] "RemoveContainer" containerID="fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545212 5136 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959"} err="failed to get container status \"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959\": rpc error: code = NotFound desc = could not find container \"fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959\": container with ID starting with fdbad52cac1967665591e61f31aaef186ea17d997cd68058a773acacd969c959 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545253 5136 scope.go:117] "RemoveContainer" containerID="870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545517 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7"} err="failed to get container status \"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7\": rpc error: code = NotFound desc = could not find container \"870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7\": container with ID starting with 870346b51ef3530ac67d0048e153ea4d80a1cc47563be50aca0bbb8097f396c7 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545545 5136 scope.go:117] "RemoveContainer" containerID="21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545764 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b"} err="failed to get container status \"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b\": rpc error: code = NotFound desc = could not find container \"21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b\": container with ID starting with 21e2189bc7cbcce736f0ee1812e28e634e54cc7a0cab40b804866d4b1a9dc81b not 
found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.545781 5136 scope.go:117] "RemoveContainer" containerID="6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.546020 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104"} err="failed to get container status \"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104\": rpc error: code = NotFound desc = could not find container \"6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104\": container with ID starting with 6bd129913f0c08f53335180396c3ca64b13d14eecafcac159f174d8b4765e104 not found: ID does not exist" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.549422 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96b99c4f-60cb-49ec-a2ba-85c6be21bc19-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.654756 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.666698 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.678743 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.679162 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="proxy-httpd" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679181 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="proxy-httpd" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.679194 5136 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-central-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679203 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-central-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.679224 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="sg-core" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679232 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="sg-core" Mar 20 07:14:46 crc kubenswrapper[5136]: E0320 07:14:46.679253 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-notification-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679260 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-notification-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679487 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-central-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679515 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="sg-core" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679528 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="ceilometer-notification-agent" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.679550 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" containerName="proxy-httpd" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.681179 
5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.691991 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.692405 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.692651 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.692773 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853524 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853604 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853623 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853650 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853684 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853708 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853751 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq8fl\" (UniqueName: \"kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.853771 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955554 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955652 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955670 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955699 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955730 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955754 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 
07:14:46.955797 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq8fl\" (UniqueName: \"kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.955833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.956220 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.956791 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.959765 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.960104 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " 
pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.960139 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.960249 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.962004 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:46 crc kubenswrapper[5136]: I0320 07:14:46.973181 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zq8fl\" (UniqueName: \"kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl\") pod \"ceilometer-0\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") " pod="openstack/ceilometer-0" Mar 20 07:14:47 crc kubenswrapper[5136]: I0320 07:14:47.010055 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 07:14:47 crc kubenswrapper[5136]: I0320 07:14:47.475674 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.354695 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerStarted","Data":"0ecf229966ba8c79d4898c6f188447ddf715aa5d36d580596d93cecd4aca45f3"} Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.355009 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerStarted","Data":"a213c0799494e4283f552e4529c929904c7d07c101510facaefb1e2a3e99ab9c"} Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.407097 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b99c4f-60cb-49ec-a2ba-85c6be21bc19" path="/var/lib/kubelet/pods/96b99c4f-60cb-49ec-a2ba-85c6be21bc19/volumes" Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.561596 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.561656 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.911983 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.994053 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle\") pod \"d0496130-a6c4-42b7-8234-4df60e60ed59\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.995267 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh8c2\" (UniqueName: \"kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2\") pod \"d0496130-a6c4-42b7-8234-4df60e60ed59\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.995324 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs\") pod \"d0496130-a6c4-42b7-8234-4df60e60ed59\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " Mar 20 07:14:48 crc kubenswrapper[5136]: I0320 07:14:48.995432 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data\") pod \"d0496130-a6c4-42b7-8234-4df60e60ed59\" (UID: \"d0496130-a6c4-42b7-8234-4df60e60ed59\") " Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.001275 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs" (OuterVolumeSpecName: "logs") pod "d0496130-a6c4-42b7-8234-4df60e60ed59" (UID: "d0496130-a6c4-42b7-8234-4df60e60ed59"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.033001 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2" (OuterVolumeSpecName: "kube-api-access-gh8c2") pod "d0496130-a6c4-42b7-8234-4df60e60ed59" (UID: "d0496130-a6c4-42b7-8234-4df60e60ed59"). InnerVolumeSpecName "kube-api-access-gh8c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.041949 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data" (OuterVolumeSpecName: "config-data") pod "d0496130-a6c4-42b7-8234-4df60e60ed59" (UID: "d0496130-a6c4-42b7-8234-4df60e60ed59"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.042963 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0496130-a6c4-42b7-8234-4df60e60ed59" (UID: "d0496130-a6c4-42b7-8234-4df60e60ed59"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.098539 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.098571 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0496130-a6c4-42b7-8234-4df60e60ed59-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.098583 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh8c2\" (UniqueName: \"kubernetes.io/projected/d0496130-a6c4-42b7-8234-4df60e60ed59-kube-api-access-gh8c2\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.098591 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d0496130-a6c4-42b7-8234-4df60e60ed59-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.366292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerStarted","Data":"cf4672bac844a81b21416c7a8623ac1f87041db75209a7a401cb201726b76413"} Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.373828 5136 generic.go:334] "Generic (PLEG): container finished" podID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerID="bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b" exitCode=0 Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.373865 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerDied","Data":"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b"} Mar 20 07:14:49 crc 
kubenswrapper[5136]: I0320 07:14:49.373885 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d0496130-a6c4-42b7-8234-4df60e60ed59","Type":"ContainerDied","Data":"3ee4cb0a8e431ba536bfd981a33fd323677b0e4a660774422b45fc0cf650bc2a"} Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.373880 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.373920 5136 scope.go:117] "RemoveContainer" containerID="bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.412667 5136 scope.go:117] "RemoveContainer" containerID="e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.428874 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.443891 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.445789 5136 scope.go:117] "RemoveContainer" containerID="bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b" Mar 20 07:14:49 crc kubenswrapper[5136]: E0320 07:14:49.449201 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b\": container with ID starting with bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b not found: ID does not exist" containerID="bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.449251 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b"} err="failed to get container 
status \"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b\": rpc error: code = NotFound desc = could not find container \"bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b\": container with ID starting with bd3c881214627c8c290b619f9198c192246477eeb7aa353a4af430f80586ea8b not found: ID does not exist" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.449279 5136 scope.go:117] "RemoveContainer" containerID="e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077" Mar 20 07:14:49 crc kubenswrapper[5136]: E0320 07:14:49.449728 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077\": container with ID starting with e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077 not found: ID does not exist" containerID="e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.449766 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077"} err="failed to get container status \"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077\": rpc error: code = NotFound desc = could not find container \"e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077\": container with ID starting with e73a3b5f003e5315b12f0018aa1a4f8bd486cbfc3163644d2cc863e537d66077 not found: ID does not exist" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.452851 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:49 crc kubenswrapper[5136]: E0320 07:14:49.453255 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-log" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.453267 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-log" Mar 20 07:14:49 crc kubenswrapper[5136]: E0320 07:14:49.453282 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-api" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.453288 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-api" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.453650 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-log" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.453659 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" containerName="nova-api-api" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.454703 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.457170 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.457598 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.457752 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.487944 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.607896 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.607950 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.608094 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.608218 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.608262 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-766z4\" (UniqueName: \"kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.608293 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.618646 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pfj4j" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="registry-server" probeResult="failure" output=< Mar 20 07:14:49 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 07:14:49 crc kubenswrapper[5136]: > Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.644927 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.644973 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.671382 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709626 5136 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709702 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-766z4\" (UniqueName: \"kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709755 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709872 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.709920 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 
crc kubenswrapper[5136]: I0320 07:14:49.710020 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.715205 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.715317 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.727856 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.732984 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data\") pod \"nova-api-0\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.735197 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-766z4\" (UniqueName: \"kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4\") pod \"nova-api-0\" (UID: 
\"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.875508 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:14:49 crc kubenswrapper[5136]: I0320 07:14:49.893126 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.386097 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerStarted","Data":"37086a66c3062e12cadb5382a0b51ad5a523fc39db2e404a6feda0518d0eb230"} Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.407397 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0496130-a6c4-42b7-8234-4df60e60ed59" path="/var/lib/kubelet/pods/d0496130-a6c4-42b7-8234-4df60e60ed59/volumes" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.408298 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.432155 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.644222 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-9v9kr"] Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.646131 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.651441 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.651630 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.656138 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.657454 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.201:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.660612 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9v9kr"] Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.700632 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsmfl\" (UniqueName: \"kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.700699 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.700900 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.701018 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.803675 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.803753 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.803946 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsmfl\" (UniqueName: 
\"kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.803980 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.807744 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.808327 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.808679 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts\") pod \"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.821586 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsmfl\" (UniqueName: \"kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl\") pod 
\"nova-cell1-cell-mapping-9v9kr\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:50 crc kubenswrapper[5136]: I0320 07:14:50.967645 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:51 crc kubenswrapper[5136]: I0320 07:14:51.412685 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerStarted","Data":"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0"} Mar 20 07:14:51 crc kubenswrapper[5136]: I0320 07:14:51.412964 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerStarted","Data":"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e"} Mar 20 07:14:51 crc kubenswrapper[5136]: I0320 07:14:51.412992 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerStarted","Data":"5093d7bfe73c985d6844a1757ce3dd059b90dc8cab997d996596bdc9609c38fa"} Mar 20 07:14:51 crc kubenswrapper[5136]: I0320 07:14:51.503751 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.5037259240000003 podStartE2EDuration="2.503725924s" podCreationTimestamp="2026-03-20 07:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:51.430847846 +0000 UTC m=+1523.690159007" watchObservedRunningTime="2026-03-20 07:14:51.503725924 +0000 UTC m=+1523.763037075" Mar 20 07:14:51 crc kubenswrapper[5136]: W0320 07:14:51.510610 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa8bbe04_14be_44c7_8264_0280abbe2023.slice/crio-be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1 WatchSource:0}: Error finding container be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1: Status 404 returned error can't find the container with id be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1 Mar 20 07:14:51 crc kubenswrapper[5136]: I0320 07:14:51.512237 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-9v9kr"] Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.422408 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerStarted","Data":"6f8eb1aeebd08bbda86b110a89e3d6395071812ae31b73c86592f671595b894d"} Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.422741 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.423924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9v9kr" event={"ID":"fa8bbe04-14be-44c7-8264-0280abbe2023","Type":"ContainerStarted","Data":"a35e3106c44fc687668b1f5ba46d5a5060fdc5acc5e49f69f4dc88d5ef142f17"} Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.423973 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9v9kr" event={"ID":"fa8bbe04-14be-44c7-8264-0280abbe2023","Type":"ContainerStarted","Data":"be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1"} Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.446103 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.679692618 podStartE2EDuration="6.446083869s" podCreationTimestamp="2026-03-20 07:14:46 +0000 UTC" firstStartedPulling="2026-03-20 
07:14:47.476736143 +0000 UTC m=+1519.736047294" lastFinishedPulling="2026-03-20 07:14:51.243127394 +0000 UTC m=+1523.502438545" observedRunningTime="2026-03-20 07:14:52.443254491 +0000 UTC m=+1524.702565642" watchObservedRunningTime="2026-03-20 07:14:52.446083869 +0000 UTC m=+1524.705395020" Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.458137 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-9v9kr" podStartSLOduration=2.458118904 podStartE2EDuration="2.458118904s" podCreationTimestamp="2026-03-20 07:14:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:14:52.457620059 +0000 UTC m=+1524.716931210" watchObservedRunningTime="2026-03-20 07:14:52.458118904 +0000 UTC m=+1524.717430055" Mar 20 07:14:52 crc kubenswrapper[5136]: I0320 07:14:52.877467 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.018466 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"] Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.018681 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="dnsmasq-dns" containerID="cri-o://a7d15dbcf3e44927ae943561f1932c75895f70aa4d9c499b6616ad45bc104764" gracePeriod=10 Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.439723 5136 generic.go:334] "Generic (PLEG): container finished" podID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerID="a7d15dbcf3e44927ae943561f1932c75895f70aa4d9c499b6616ad45bc104764" exitCode=0 Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.440453 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" 
event={"ID":"470e7cfd-fbbb-467e-8115-05cb5654655c","Type":"ContainerDied","Data":"a7d15dbcf3e44927ae943561f1932c75895f70aa4d9c499b6616ad45bc104764"} Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.526489 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.690900 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc\") pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.690975 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config\") pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.691015 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb\") pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.691054 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zhvw\" (UniqueName: \"kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw\") pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.691163 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb\") 
pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.691205 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0\") pod \"470e7cfd-fbbb-467e-8115-05cb5654655c\" (UID: \"470e7cfd-fbbb-467e-8115-05cb5654655c\") " Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.698611 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw" (OuterVolumeSpecName: "kube-api-access-5zhvw") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "kube-api-access-5zhvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.737002 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.743229 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config" (OuterVolumeSpecName: "config") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.744491 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.747299 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.758900 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "470e7cfd-fbbb-467e-8115-05cb5654655c" (UID: "470e7cfd-fbbb-467e-8115-05cb5654655c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793609 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793647 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793660 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793672 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793685 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zhvw\" (UniqueName: \"kubernetes.io/projected/470e7cfd-fbbb-467e-8115-05cb5654655c-kube-api-access-5zhvw\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:53 crc kubenswrapper[5136]: I0320 07:14:53.793697 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/470e7cfd-fbbb-467e-8115-05cb5654655c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:54 crc kubenswrapper[5136]: I0320 07:14:54.456465 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" event={"ID":"470e7cfd-fbbb-467e-8115-05cb5654655c","Type":"ContainerDied","Data":"0815fbb3bd31db22d56e0ee37bf687b5d4a3901815c3e15d641f9bfe006afa83"} Mar 20 07:14:54 crc 
kubenswrapper[5136]: I0320 07:14:54.456521 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" Mar 20 07:14:54 crc kubenswrapper[5136]: I0320 07:14:54.456833 5136 scope.go:117] "RemoveContainer" containerID="a7d15dbcf3e44927ae943561f1932c75895f70aa4d9c499b6616ad45bc104764" Mar 20 07:14:54 crc kubenswrapper[5136]: I0320 07:14:54.490543 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"] Mar 20 07:14:54 crc kubenswrapper[5136]: I0320 07:14:54.492949 5136 scope.go:117] "RemoveContainer" containerID="296caa72bdf067401801dcafde6b349a8fa9a120a15acef2e0b624bdeebcf37a" Mar 20 07:14:54 crc kubenswrapper[5136]: I0320 07:14:54.515647 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7b495b9cc7-zflc2"] Mar 20 07:14:56 crc kubenswrapper[5136]: I0320 07:14:56.415456 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" path="/var/lib/kubelet/pods/470e7cfd-fbbb-467e-8115-05cb5654655c/volumes" Mar 20 07:14:57 crc kubenswrapper[5136]: I0320 07:14:57.506281 5136 generic.go:334] "Generic (PLEG): container finished" podID="fa8bbe04-14be-44c7-8264-0280abbe2023" containerID="a35e3106c44fc687668b1f5ba46d5a5060fdc5acc5e49f69f4dc88d5ef142f17" exitCode=0 Mar 20 07:14:57 crc kubenswrapper[5136]: I0320 07:14:57.506320 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9v9kr" event={"ID":"fa8bbe04-14be-44c7-8264-0280abbe2023","Type":"ContainerDied","Data":"a35e3106c44fc687668b1f5ba46d5a5060fdc5acc5e49f69f4dc88d5ef142f17"} Mar 20 07:14:57 crc kubenswrapper[5136]: I0320 07:14:57.644163 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 07:14:57 crc kubenswrapper[5136]: I0320 07:14:57.644208 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/nova-metadata-0" Mar 20 07:14:58 crc kubenswrapper[5136]: I0320 07:14:58.370040 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7b495b9cc7-zflc2" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.193:5353: i/o timeout" Mar 20 07:14:58 crc kubenswrapper[5136]: I0320 07:14:58.623876 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:58 crc kubenswrapper[5136]: I0320 07:14:58.690775 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:14:58 crc kubenswrapper[5136]: I0320 07:14:58.865957 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:14:58 crc kubenswrapper[5136]: I0320 07:14:58.936343 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.091884 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsmfl\" (UniqueName: \"kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl\") pod \"fa8bbe04-14be-44c7-8264-0280abbe2023\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.092051 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle\") pod \"fa8bbe04-14be-44c7-8264-0280abbe2023\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.092100 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data\") pod \"fa8bbe04-14be-44c7-8264-0280abbe2023\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.092174 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts\") pod \"fa8bbe04-14be-44c7-8264-0280abbe2023\" (UID: \"fa8bbe04-14be-44c7-8264-0280abbe2023\") " Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.097190 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts" (OuterVolumeSpecName: "scripts") pod "fa8bbe04-14be-44c7-8264-0280abbe2023" (UID: "fa8bbe04-14be-44c7-8264-0280abbe2023"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.097436 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl" (OuterVolumeSpecName: "kube-api-access-zsmfl") pod "fa8bbe04-14be-44c7-8264-0280abbe2023" (UID: "fa8bbe04-14be-44c7-8264-0280abbe2023"). InnerVolumeSpecName "kube-api-access-zsmfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.124496 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa8bbe04-14be-44c7-8264-0280abbe2023" (UID: "fa8bbe04-14be-44c7-8264-0280abbe2023"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.124947 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data" (OuterVolumeSpecName: "config-data") pod "fa8bbe04-14be-44c7-8264-0280abbe2023" (UID: "fa8bbe04-14be-44c7-8264-0280abbe2023"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.193982 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.194017 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.194029 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa8bbe04-14be-44c7-8264-0280abbe2023-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.194040 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zsmfl\" (UniqueName: \"kubernetes.io/projected/fa8bbe04-14be-44c7-8264-0280abbe2023-kube-api-access-zsmfl\") on node \"crc\" DevicePath \"\"" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.521588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-9v9kr" event={"ID":"fa8bbe04-14be-44c7-8264-0280abbe2023","Type":"ContainerDied","Data":"be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1"} Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.521627 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be30541db8e671dad79fb4a2f0e4818892fcd3543562fdd78e492e7b8e7041e1" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.521762 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-9v9kr" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.650086 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.651116 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.656535 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.748018 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.748465 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-log" containerID="cri-o://303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" gracePeriod=30 Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.749002 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-api" containerID="cri-o://321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" gracePeriod=30 Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.763728 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.764063 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerName="nova-scheduler-scheduler" containerID="cri-o://2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528" gracePeriod=30 Mar 20 07:14:59 crc kubenswrapper[5136]: I0320 07:14:59.773919 5136 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.163316 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q"] Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.164792 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="dnsmasq-dns" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.164819 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="dnsmasq-dns" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.164883 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="init" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.164894 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="init" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.164912 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa8bbe04-14be-44c7-8264-0280abbe2023" containerName="nova-manage" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.164921 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa8bbe04-14be-44c7-8264-0280abbe2023" containerName="nova-manage" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.165484 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="470e7cfd-fbbb-467e-8115-05cb5654655c" containerName="dnsmasq-dns" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.165537 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa8bbe04-14be-44c7-8264-0280abbe2023" containerName="nova-manage" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.167084 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.175426 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q"] Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.175653 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.182201 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.313351 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.313731 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlqcb\" (UniqueName: \"kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.313919 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.320023 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.342687 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.344053 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.349476 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.349562 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerName="nova-scheduler-scheduler" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.415949 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-766z4\" (UniqueName: 
\"kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416090 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416216 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416260 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416352 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416405 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs\") pod \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\" (UID: \"4c230e4e-a220-4596-8c60-ffd4a7b86cb9\") " Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416894 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.416925 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlqcb\" (UniqueName: \"kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.417013 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.417228 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs" (OuterVolumeSpecName: "logs") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.417831 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.421577 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.424108 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4" (OuterVolumeSpecName: "kube-api-access-766z4") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "kube-api-access-766z4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.435759 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rlqcb\" (UniqueName: \"kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb\") pod \"collect-profiles-29566515-7hn9q\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.447717 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.448391 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data" (OuterVolumeSpecName: "config-data") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.477047 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.483752 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4c230e4e-a220-4596-8c60-ffd4a7b86cb9" (UID: "4c230e4e-a220-4596-8c60-ffd4a7b86cb9"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.500165 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518875 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518912 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518923 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-766z4\" (UniqueName: \"kubernetes.io/projected/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-kube-api-access-766z4\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518934 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518942 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.518950 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c230e4e-a220-4596-8c60-ffd4a7b86cb9-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.533232 5136 generic.go:334] "Generic (PLEG): container finished" podID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerID="321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" exitCode=0 Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.533262 5136 generic.go:334] "Generic (PLEG): container finished" podID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerID="303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" exitCode=143 Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.534416 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.536683 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerDied","Data":"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0"} Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.536731 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerDied","Data":"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e"} Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.536745 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"4c230e4e-a220-4596-8c60-ffd4a7b86cb9","Type":"ContainerDied","Data":"5093d7bfe73c985d6844a1757ce3dd059b90dc8cab997d996596bdc9609c38fa"} Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.536764 5136 scope.go:117] "RemoveContainer" containerID="321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.537056 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pfj4j" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="registry-server" containerID="cri-o://ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14" gracePeriod=2 Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.542727 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.578519 5136 scope.go:117] "RemoveContainer" containerID="303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.591275 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:00 crc 
kubenswrapper[5136]: I0320 07:15:00.608291 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.612202 5136 scope.go:117] "RemoveContainer" containerID="321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.620004 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0\": container with ID starting with 321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0 not found: ID does not exist" containerID="321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.620049 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0"} err="failed to get container status \"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0\": rpc error: code = NotFound desc = could not find container \"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0\": container with ID starting with 321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0 not found: ID does not exist" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.620075 5136 scope.go:117] "RemoveContainer" containerID="303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.620490 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e\": container with ID starting with 303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e not found: ID does not exist" 
containerID="303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.620544 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e"} err="failed to get container status \"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e\": rpc error: code = NotFound desc = could not find container \"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e\": container with ID starting with 303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e not found: ID does not exist" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.620575 5136 scope.go:117] "RemoveContainer" containerID="321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.622843 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.623247 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-log" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.623259 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-log" Mar 20 07:15:00 crc kubenswrapper[5136]: E0320 07:15:00.623279 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-api" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.623284 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-api" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.623450 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-log" Mar 20 07:15:00 crc 
kubenswrapper[5136]: I0320 07:15:00.623475 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" containerName="nova-api-api" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.624418 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.627059 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0"} err="failed to get container status \"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0\": rpc error: code = NotFound desc = could not find container \"321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0\": container with ID starting with 321e15f8019123fab2b3a362f5ea0e71d7d7aa49cde24181c9233070cab3f9a0 not found: ID does not exist" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.627113 5136 scope.go:117] "RemoveContainer" containerID="303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.629270 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e"} err="failed to get container status \"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e\": rpc error: code = NotFound desc = could not find container \"303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e\": container with ID starting with 303dc3dc44eec4b8f1cd2d49f30911a667eb9b6050ed25807513fcdfef13449e not found: ID does not exist" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.629426 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.629458 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-api-config-data" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.629786 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.640613 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.722758 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krchg\" (UniqueName: \"kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.722926 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.722986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.723028 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.723050 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.723101 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.824843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.824878 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.824930 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.824954 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krchg\" (UniqueName: \"kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " 
pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.824999 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.825049 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.829253 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.832722 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.833001 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.833071 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.833535 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.842452 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krchg\" (UniqueName: \"kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg\") pod \"nova-api-0\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") " pod="openstack/nova-api-0" Mar 20 07:15:00 crc kubenswrapper[5136]: I0320 07:15:00.985398 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.010429 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q"] Mar 20 07:15:01 crc kubenswrapper[5136]: W0320 07:15:01.047083 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6f40568b_2bbc_4d1e_b089_6e08e1eede4b.slice/crio-ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0 WatchSource:0}: Error finding container ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0: Status 404 returned error can't find the container with id ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0 Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.232206 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.333417 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content\") pod \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.333809 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities\") pod \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.333922 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5pm5\" (UniqueName: \"kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5\") pod \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\" (UID: \"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7\") " Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.335209 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities" (OuterVolumeSpecName: "utilities") pod "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" (UID: "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.339209 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5" (OuterVolumeSpecName: "kube-api-access-t5pm5") pod "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" (UID: "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7"). InnerVolumeSpecName "kube-api-access-t5pm5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.438155 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5pm5\" (UniqueName: \"kubernetes.io/projected/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-kube-api-access-t5pm5\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.438184 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:01 crc kubenswrapper[5136]: W0320 07:15:01.444853 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9dc2d320_2468_4a45_ba6b_69ea478b5e8c.slice/crio-3e1ef3947decd903ee68750e425aae8b49ed5ff25716e3720a5bb3892e21abbc WatchSource:0}: Error finding container 3e1ef3947decd903ee68750e425aae8b49ed5ff25716e3720a5bb3892e21abbc: Status 404 returned error can't find the container with id 3e1ef3947decd903ee68750e425aae8b49ed5ff25716e3720a5bb3892e21abbc Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.450622 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.481783 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" (UID: "0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.540378 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.542098 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerStarted","Data":"3e1ef3947decd903ee68750e425aae8b49ed5ff25716e3720a5bb3892e21abbc"} Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.543603 5136 generic.go:334] "Generic (PLEG): container finished" podID="6f40568b-2bbc-4d1e-b089-6e08e1eede4b" containerID="9f24a13849a44546b978a1e086eb14881e8d529298f6ffe2023d8ef7f1bdc4c6" exitCode=0 Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.543667 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" event={"ID":"6f40568b-2bbc-4d1e-b089-6e08e1eede4b","Type":"ContainerDied","Data":"9f24a13849a44546b978a1e086eb14881e8d529298f6ffe2023d8ef7f1bdc4c6"} Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.543696 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" event={"ID":"6f40568b-2bbc-4d1e-b089-6e08e1eede4b","Type":"ContainerStarted","Data":"ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0"} Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547130 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerID="ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14" exitCode=0 Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547172 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" 
event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerDied","Data":"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14"} Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547189 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfj4j" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547235 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfj4j" event={"ID":"0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7","Type":"ContainerDied","Data":"14fe667f3db588129ef772a0e1c0daeddde9622aa1ffb9941f75f06fb9c1984f"} Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547258 5136 scope.go:117] "RemoveContainer" containerID="ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547337 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-log" containerID="cri-o://9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8" gracePeriod=30 Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.547371 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-metadata" containerID="cri-o://5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8" gracePeriod=30 Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.576038 5136 scope.go:117] "RemoveContainer" containerID="3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.589816 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.597602 5136 kubelet.go:2431] "SyncLoop 
REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pfj4j"] Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.626513 5136 scope.go:117] "RemoveContainer" containerID="05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.645736 5136 scope.go:117] "RemoveContainer" containerID="ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14" Mar 20 07:15:01 crc kubenswrapper[5136]: E0320 07:15:01.646147 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14\": container with ID starting with ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14 not found: ID does not exist" containerID="ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.646201 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14"} err="failed to get container status \"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14\": rpc error: code = NotFound desc = could not find container \"ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14\": container with ID starting with ee39d4c269210fc3aa49da56c3480727ac137c9f6ab6ab45972dc59963ac8f14 not found: ID does not exist" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.646228 5136 scope.go:117] "RemoveContainer" containerID="3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab" Mar 20 07:15:01 crc kubenswrapper[5136]: E0320 07:15:01.646558 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab\": container with ID starting with 
3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab not found: ID does not exist" containerID="3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.646605 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab"} err="failed to get container status \"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab\": rpc error: code = NotFound desc = could not find container \"3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab\": container with ID starting with 3e9029c3cec1986cc74604ef2d696316b88fb9ed3d52bbb9a2a7d8c6e94535ab not found: ID does not exist" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.646649 5136 scope.go:117] "RemoveContainer" containerID="05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17" Mar 20 07:15:01 crc kubenswrapper[5136]: E0320 07:15:01.646955 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17\": container with ID starting with 05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17 not found: ID does not exist" containerID="05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17" Mar 20 07:15:01 crc kubenswrapper[5136]: I0320 07:15:01.647009 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17"} err="failed to get container status \"05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17\": rpc error: code = NotFound desc = could not find container \"05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17\": container with ID starting with 05599c5c42c8e71a009284e5d500cfedf849e0b8ec367103e0a55059d9a7be17 not found: ID does not 
exist" Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.409151 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" path="/var/lib/kubelet/pods/0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7/volumes" Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.410730 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c230e4e-a220-4596-8c60-ffd4a7b86cb9" path="/var/lib/kubelet/pods/4c230e4e-a220-4596-8c60-ffd4a7b86cb9/volumes" Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.565620 5136 generic.go:334] "Generic (PLEG): container finished" podID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerID="9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8" exitCode=143 Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.565689 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerDied","Data":"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8"} Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.567885 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerStarted","Data":"d7a0a1abaf7649e23b98061506f3cd0ac2d6d6bb3c69694e84fdfcd0ac7ca124"} Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.567934 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerStarted","Data":"a74fc94f08f1ff50393fca876adcf8dbd23397ae767f8be9bb23ab400c14c48a"} Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.604133 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.6041116989999997 podStartE2EDuration="2.604111699s" podCreationTimestamp="2026-03-20 07:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:15:02.592391914 +0000 UTC m=+1534.851703105" watchObservedRunningTime="2026-03-20 07:15:02.604111699 +0000 UTC m=+1534.863422860" Mar 20 07:15:02 crc kubenswrapper[5136]: I0320 07:15:02.959082 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.068783 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume\") pod \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.069006 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlqcb\" (UniqueName: \"kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb\") pod \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.069188 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume\") pod \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\" (UID: \"6f40568b-2bbc-4d1e-b089-6e08e1eede4b\") " Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.069535 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume" (OuterVolumeSpecName: "config-volume") pod "6f40568b-2bbc-4d1e-b089-6e08e1eede4b" (UID: "6f40568b-2bbc-4d1e-b089-6e08e1eede4b"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.070029 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.074955 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb" (OuterVolumeSpecName: "kube-api-access-rlqcb") pod "6f40568b-2bbc-4d1e-b089-6e08e1eede4b" (UID: "6f40568b-2bbc-4d1e-b089-6e08e1eede4b"). InnerVolumeSpecName "kube-api-access-rlqcb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.075402 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6f40568b-2bbc-4d1e-b089-6e08e1eede4b" (UID: "6f40568b-2bbc-4d1e-b089-6e08e1eede4b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.171735 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlqcb\" (UniqueName: \"kubernetes.io/projected/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-kube-api-access-rlqcb\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.171774 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6f40568b-2bbc-4d1e-b089-6e08e1eede4b-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.578121 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.578162 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q" event={"ID":"6f40568b-2bbc-4d1e-b089-6e08e1eede4b","Type":"ContainerDied","Data":"ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0"} Mar 20 07:15:03 crc kubenswrapper[5136]: I0320 07:15:03.578183 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceceab2754596e5b938d5beb3a313d598756f96a0ff399999d5b5f6d7fee3bb0" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.588191 5136 generic.go:334] "Generic (PLEG): container finished" podID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerID="2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528" exitCode=0 Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.588237 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a52f0c9-0dde-48d7-83a3-bb05b1217295","Type":"ContainerDied","Data":"2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528"} Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.588527 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"6a52f0c9-0dde-48d7-83a3-bb05b1217295","Type":"ContainerDied","Data":"746593fd75baa00dce61da5dddc1f1d308565b21d07137d5d833134ea9410d34"} Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.588542 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="746593fd75baa00dce61da5dddc1f1d308565b21d07137d5d833134ea9410d34" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.656161 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.804730 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data\") pod \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.805115 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle\") pod \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.805223 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shtw7\" (UniqueName: \"kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7\") pod \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\" (UID: \"6a52f0c9-0dde-48d7-83a3-bb05b1217295\") " Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.819707 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7" (OuterVolumeSpecName: "kube-api-access-shtw7") pod "6a52f0c9-0dde-48d7-83a3-bb05b1217295" (UID: "6a52f0c9-0dde-48d7-83a3-bb05b1217295"). InnerVolumeSpecName "kube-api-access-shtw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.836379 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data" (OuterVolumeSpecName: "config-data") pod "6a52f0c9-0dde-48d7-83a3-bb05b1217295" (UID: "6a52f0c9-0dde-48d7-83a3-bb05b1217295"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.851057 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a52f0c9-0dde-48d7-83a3-bb05b1217295" (UID: "6a52f0c9-0dde-48d7-83a3-bb05b1217295"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.907548 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.907585 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a52f0c9-0dde-48d7-83a3-bb05b1217295-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.907598 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shtw7\" (UniqueName: \"kubernetes.io/projected/6a52f0c9-0dde-48d7-83a3-bb05b1217295-kube-api-access-shtw7\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:04 crc kubenswrapper[5136]: I0320 07:15:04.994127 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.110573 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs\") pod \"4622969f-2f2e-42d7-81a6-bc6baa386aec\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.110756 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwgkt\" (UniqueName: \"kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt\") pod \"4622969f-2f2e-42d7-81a6-bc6baa386aec\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.110875 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data\") pod \"4622969f-2f2e-42d7-81a6-bc6baa386aec\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.111178 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle\") pod \"4622969f-2f2e-42d7-81a6-bc6baa386aec\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.111230 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs\") pod \"4622969f-2f2e-42d7-81a6-bc6baa386aec\" (UID: \"4622969f-2f2e-42d7-81a6-bc6baa386aec\") " Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.112300 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs" (OuterVolumeSpecName: "logs") pod "4622969f-2f2e-42d7-81a6-bc6baa386aec" (UID: "4622969f-2f2e-42d7-81a6-bc6baa386aec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.113859 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt" (OuterVolumeSpecName: "kube-api-access-lwgkt") pod "4622969f-2f2e-42d7-81a6-bc6baa386aec" (UID: "4622969f-2f2e-42d7-81a6-bc6baa386aec"). InnerVolumeSpecName "kube-api-access-lwgkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.138259 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data" (OuterVolumeSpecName: "config-data") pod "4622969f-2f2e-42d7-81a6-bc6baa386aec" (UID: "4622969f-2f2e-42d7-81a6-bc6baa386aec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.146956 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4622969f-2f2e-42d7-81a6-bc6baa386aec" (UID: "4622969f-2f2e-42d7-81a6-bc6baa386aec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.156812 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "4622969f-2f2e-42d7-81a6-bc6baa386aec" (UID: "4622969f-2f2e-42d7-81a6-bc6baa386aec"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.213775 5136 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.213811 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwgkt\" (UniqueName: \"kubernetes.io/projected/4622969f-2f2e-42d7-81a6-bc6baa386aec-kube-api-access-lwgkt\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.213843 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.213861 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4622969f-2f2e-42d7-81a6-bc6baa386aec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.213871 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4622969f-2f2e-42d7-81a6-bc6baa386aec-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598701 5136 generic.go:334] "Generic (PLEG): container finished" podID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerID="5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8" exitCode=0 Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598764 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerDied","Data":"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8"} 
Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598834 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"4622969f-2f2e-42d7-81a6-bc6baa386aec","Type":"ContainerDied","Data":"c57256ebbbe3274d2806f8f41571ca74c969c50a2ce02b08771f1ab16c5bdd1d"} Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598791 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598783 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.598882 5136 scope.go:117] "RemoveContainer" containerID="5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.621330 5136 scope.go:117] "RemoveContainer" containerID="9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.640097 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.649361 5136 scope.go:117] "RemoveContainer" containerID="5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.649764 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8\": container with ID starting with 5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8 not found: ID does not exist" containerID="5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.649803 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8"} err="failed to get container status \"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8\": rpc error: code = NotFound desc = could not find container \"5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8\": container with ID starting with 5f8f148b9c0b744a015c9e2414e24715604e9a0f21fe5a12fd0d7f8b2489eeb8 not found: ID does not exist" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.649836 5136 scope.go:117] "RemoveContainer" containerID="9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.650031 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8\": container with ID starting with 9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8 not found: ID does not exist" containerID="9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.650052 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8"} err="failed to get container status \"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8\": rpc error: code = NotFound desc = could not find container \"9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8\": container with ID starting with 9e93c2c0c889e2c58126cc5c6de4c2d17f754184e3d07db0df95a7f6d46049a8 not found: ID does not exist" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.656215 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.670326 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-metadata-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.686957 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.694513 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.694933 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerName="nova-scheduler-scheduler" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.694946 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerName="nova-scheduler-scheduler" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.694957 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-metadata" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.694963 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-metadata" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.694973 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f40568b-2bbc-4d1e-b089-6e08e1eede4b" containerName="collect-profiles" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.694979 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f40568b-2bbc-4d1e-b089-6e08e1eede4b" containerName="collect-profiles" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.694990 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="extract-utilities" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.694996 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="extract-utilities" Mar 20 07:15:05 crc 
kubenswrapper[5136]: E0320 07:15:05.695006 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-log" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695012 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-log" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.695030 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="extract-content" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695035 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="extract-content" Mar 20 07:15:05 crc kubenswrapper[5136]: E0320 07:15:05.695053 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="registry-server" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695059 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="registry-server" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695281 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ecd0151-83ec-4fd4-b2ba-cce2836f8ae7" containerName="registry-server" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695292 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-log" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695303 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" containerName="nova-metadata-metadata" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695314 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f40568b-2bbc-4d1e-b089-6e08e1eede4b" containerName="collect-profiles" 
Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695327 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" containerName="nova-scheduler-scheduler" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.695947 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.707477 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.708252 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.710728 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.712419 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.715589 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.715707 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.724129 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.825996 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826066 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88mxs\" (UniqueName: \"kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826096 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826244 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826320 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826473 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826553 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-vbvkl\" (UniqueName: \"kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.826670 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928080 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbvkl\" (UniqueName: \"kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928213 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928247 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data\") pod \"nova-metadata-0\" (UID: 
\"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928288 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88mxs\" (UniqueName: \"kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928319 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928350 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.928381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.931442 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.934040 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.934300 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.934534 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.934876 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.935293 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.950570 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbvkl\" (UniqueName: \"kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl\") pod \"nova-scheduler-0\" (UID: 
\"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " pod="openstack/nova-scheduler-0" Mar 20 07:15:05 crc kubenswrapper[5136]: I0320 07:15:05.952587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88mxs\" (UniqueName: \"kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs\") pod \"nova-metadata-0\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") " pod="openstack/nova-metadata-0" Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.033679 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.051988 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.412591 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4622969f-2f2e-42d7-81a6-bc6baa386aec" path="/var/lib/kubelet/pods/4622969f-2f2e-42d7-81a6-bc6baa386aec/volumes" Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.413725 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a52f0c9-0dde-48d7-83a3-bb05b1217295" path="/var/lib/kubelet/pods/6a52f0c9-0dde-48d7-83a3-bb05b1217295/volumes" Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.466433 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:06 crc kubenswrapper[5136]: W0320 07:15:06.468580 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2e8f54f_5434_4cf0_94b9_38648bf7ba77.slice/crio-cc1f222689540ab41cb0293a65f9305d971ad1f909b8b87d7d0b7c47db1a4f3a WatchSource:0}: Error finding container cc1f222689540ab41cb0293a65f9305d971ad1f909b8b87d7d0b7c47db1a4f3a: Status 404 returned error can't find the container with id cc1f222689540ab41cb0293a65f9305d971ad1f909b8b87d7d0b7c47db1a4f3a Mar 20 07:15:06 
crc kubenswrapper[5136]: I0320 07:15:06.564921 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 07:15:06 crc kubenswrapper[5136]: W0320 07:15:06.566779 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaf66742a_1452_436f_a22e_7dc277cf690a.slice/crio-5ace71694fb7cbf300c7ddec51eede62a282d34c72dfdbb59b1c4aee86e0f888 WatchSource:0}: Error finding container 5ace71694fb7cbf300c7ddec51eede62a282d34c72dfdbb59b1c4aee86e0f888: Status 404 returned error can't find the container with id 5ace71694fb7cbf300c7ddec51eede62a282d34c72dfdbb59b1c4aee86e0f888 Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.612995 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f2e8f54f-5434-4cf0-94b9-38648bf7ba77","Type":"ContainerStarted","Data":"cc1f222689540ab41cb0293a65f9305d971ad1f909b8b87d7d0b7c47db1a4f3a"} Mar 20 07:15:06 crc kubenswrapper[5136]: I0320 07:15:06.614802 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerStarted","Data":"5ace71694fb7cbf300c7ddec51eede62a282d34c72dfdbb59b1c4aee86e0f888"} Mar 20 07:15:07 crc kubenswrapper[5136]: I0320 07:15:07.624470 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerStarted","Data":"f58e5688042713c0b783865903009ad2d11f4198be5353e16c7c078fdfea1674"} Mar 20 07:15:07 crc kubenswrapper[5136]: I0320 07:15:07.624782 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerStarted","Data":"deb975c81a5be70590a5a6b6abaf2e6a45b6848b791c313d96f348b2eb335fa8"} Mar 20 07:15:07 crc kubenswrapper[5136]: I0320 07:15:07.628293 5136 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-scheduler-0" event={"ID":"f2e8f54f-5434-4cf0-94b9-38648bf7ba77","Type":"ContainerStarted","Data":"103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5"} Mar 20 07:15:07 crc kubenswrapper[5136]: I0320 07:15:07.650071 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.650054149 podStartE2EDuration="2.650054149s" podCreationTimestamp="2026-03-20 07:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:15:07.647965154 +0000 UTC m=+1539.907276385" watchObservedRunningTime="2026-03-20 07:15:07.650054149 +0000 UTC m=+1539.909365300" Mar 20 07:15:07 crc kubenswrapper[5136]: I0320 07:15:07.686610 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.686584326 podStartE2EDuration="2.686584326s" podCreationTimestamp="2026-03-20 07:15:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 07:15:07.668467803 +0000 UTC m=+1539.927779014" watchObservedRunningTime="2026-03-20 07:15:07.686584326 +0000 UTC m=+1539.945895517" Mar 20 07:15:10 crc kubenswrapper[5136]: I0320 07:15:10.986474 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:15:10 crc kubenswrapper[5136]: I0320 07:15:10.986932 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 07:15:11 crc kubenswrapper[5136]: I0320 07:15:11.034193 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 07:15:11 crc kubenswrapper[5136]: I0320 07:15:11.998057 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" 
containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:11 crc kubenswrapper[5136]: I0320 07:15:11.998100 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.208:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:15 crc kubenswrapper[5136]: I0320 07:15:15.822288 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:15:15 crc kubenswrapper[5136]: I0320 07:15:15.823153 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:15:16 crc kubenswrapper[5136]: I0320 07:15:16.034010 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 20 07:15:16 crc kubenswrapper[5136]: I0320 07:15:16.053118 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 07:15:16 crc kubenswrapper[5136]: I0320 07:15:16.053179 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 07:15:16 crc kubenswrapper[5136]: I0320 07:15:16.059920 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 20 07:15:16 crc 
kubenswrapper[5136]: I0320 07:15:16.764420 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 20 07:15:17 crc kubenswrapper[5136]: I0320 07:15:17.030023 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 20 07:15:17 crc kubenswrapper[5136]: I0320 07:15:17.073974 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:17 crc kubenswrapper[5136]: I0320 07:15:17.074339 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.210:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:18 crc kubenswrapper[5136]: I0320 07:15:18.985983 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 07:15:18 crc kubenswrapper[5136]: I0320 07:15:18.986318 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 07:15:20 crc kubenswrapper[5136]: I0320 07:15:20.996772 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 07:15:21 crc kubenswrapper[5136]: I0320 07:15:21.000545 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 07:15:21 crc kubenswrapper[5136]: I0320 07:15:21.004778 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 07:15:21 crc kubenswrapper[5136]: I0320 07:15:21.793745 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Mar 20 07:15:24 crc kubenswrapper[5136]: I0320 07:15:24.052865 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 07:15:24 crc kubenswrapper[5136]: I0320 07:15:24.053313 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 07:15:26 crc kubenswrapper[5136]: I0320 07:15:26.057616 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 07:15:26 crc kubenswrapper[5136]: I0320 07:15:26.057985 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 07:15:26 crc kubenswrapper[5136]: I0320 07:15:26.062831 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 07:15:26 crc kubenswrapper[5136]: I0320 07:15:26.063672 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 07:15:45 crc kubenswrapper[5136]: I0320 07:15:45.822186 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:15:45 crc kubenswrapper[5136]: I0320 07:15:45.824184 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:15:45 crc kubenswrapper[5136]: I0320 07:15:45.824665 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:15:45 crc kubenswrapper[5136]: I0320 07:15:45.825525 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:15:45 crc kubenswrapper[5136]: I0320 07:15:45.825599 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6" gracePeriod=600 Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.111562 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6" exitCode=0 Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.111888 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6"} Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.111920 5136 scope.go:117] "RemoveContainer" containerID="e88f4329620c5c7ec6c41ba99712e43215e37853afedf89b0a54491b5a4bfe4f" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.175183 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5429-account-create-update-kc9f7"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.236898 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-5429-account-create-update-kc9f7"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.317750 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.320398 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.325837 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.348175 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-b5fwk"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.362932 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.377857 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-zlrc6"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.388743 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.388864 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgnfv\" (UniqueName: \"kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " 
pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.392408 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-b5fwk"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.418058 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c44ee109-b721-41c2-bc45-8c6097d31402" path="/var/lib/kubelet/pods/c44ee109-b721-41c2-bc45-8c6097d31402/volumes" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.418634 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec1091b0-0c0e-40a9-9131-93d8e912d0af" path="/var/lib/kubelet/pods/ec1091b0-0c0e-40a9-9131-93d8e912d0af/volumes" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.419158 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-zlrc6"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.434614 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-mzns4"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.440895 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.454062 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.472206 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mzns4"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.490302 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgnfv\" (UniqueName: \"kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.490433 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.491700 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.513721 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.515033 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.517938 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.533895 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.534202 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="17ad787b-18bc-4afd-840b-2458b494094a" containerName="openstackclient" containerID="cri-o://ef5361e0b73e9c41cc23b5ebe9348fce6a363e59e0bc84a305ad44756dd780af" gracePeriod=2 Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.557738 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgnfv\" (UniqueName: \"kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv\") pod \"barbican-5429-account-create-update-54j5b\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.593289 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knrlt\" (UniqueName: \"kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt\") pod \"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.593523 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts\") pod \"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " 
pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.601959 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.613302 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.631558 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.642551 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.694622 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts\") pod \"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.694668 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h64x\" (UniqueName: \"kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.694695 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " 
pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.694736 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knrlt\" (UniqueName: \"kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt\") pod \"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.695235 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts\") pod \"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.738975 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.739270 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="ovn-northd" containerID="cri-o://91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" gracePeriod=30 Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.739691 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="openstack-network-exporter" containerID="cri-o://e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf" gracePeriod=30 Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.750435 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-knrlt\" (UniqueName: \"kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt\") pod 
\"root-account-create-update-mzns4\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") " pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.762489 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.781658 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-fdc6-account-create-update-sfc2q"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.796604 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h64x\" (UniqueName: \"kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.797194 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.798648 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.812882 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-fdc6-account-create-update-sfc2q"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.855590 
5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h64x\" (UniqueName: \"kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x\") pod \"nova-api-e3bd-account-create-update-gzjnr\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") " pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.868959 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-ntmkb"] Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.884543 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" Mar 20 07:15:46 crc kubenswrapper[5136]: I0320 07:15:46.948582 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-ntmkb"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.020957 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"] Mar 20 07:15:47 crc kubenswrapper[5136]: E0320 07:15:47.028747 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17ad787b-18bc-4afd-840b-2458b494094a" containerName="openstackclient" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.028779 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="17ad787b-18bc-4afd-840b-2458b494094a" containerName="openstackclient" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.029077 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="17ad787b-18bc-4afd-840b-2458b494094a" containerName="openstackclient" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.029709 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.034757 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.034885 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.036197 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.042575 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.096669 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.106897 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.108303 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.114838 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.117276 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.127081 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.156881 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.157560 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="openstack-network-exporter" containerID="cri-o://ad85344499cd2c34ea152d61f6efd8d5a2edf8814c85572a74d76108d11d3655" gracePeriod=300 Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.167231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"} Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.216924 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgbc2\" (UniqueName: \"kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.217002 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.217021 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.217067 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.217093 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzplt\" (UniqueName: \"kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.217113 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x89p2\" (UniqueName: \"kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2\") 
pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.222960 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-lv952"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.245794 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-lv952"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.258325 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.258676 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="openstack-network-exporter" containerID="cri-o://55e70c80be714d08791bbb875a2885eb056808546361bafa1ce59b4a2b4afd94" gracePeriod=300 Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.277932 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-llt2h"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.302936 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a0f6-account-create-update-c9hl7"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319370 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgbc2\" (UniqueName: \"kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319460 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319485 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319535 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzplt\" (UniqueName: \"kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.319608 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x89p2\" (UniqueName: \"kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2\") pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: 
I0320 07:15:47.321069 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.321534 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts\") pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.322082 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.357722 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-llt2h"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.396566 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a0f6-account-create-update-c9hl7"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.397880 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x89p2\" (UniqueName: \"kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2\") pod \"nova-cell0-0f90-account-create-update-k7zvd\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") " pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc 
kubenswrapper[5136]: I0320 07:15:47.397920 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgbc2\" (UniqueName: \"kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2\") pod \"placement-a0f6-account-create-update-d5xps\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.429395 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzplt\" (UniqueName: \"kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt\") pod \"nova-cell1-0423-account-create-update-9sp6w\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") " pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.434129 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-a033-account-create-update-ww8m7"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.466870 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-a033-account-create-update-ww8m7"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.486255 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-bc06-account-create-update-lm56h"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.494388 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.557017 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="ovsdbserver-nb" containerID="cri-o://c02c0e7e0e6b0a33a002d424a3ac60fcdca9be308ea7764e0da41ae85bb3639a" gracePeriod=300 Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.557395 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="ovsdbserver-sb" containerID="cri-o://7f81f78f97fc5d48f48b6354b794c050f707e5b35fc6d46c7df2de9e4878960b" gracePeriod=300 Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.564687 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-bc06-account-create-update-lm56h"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.569571 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.583869 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jv7f9"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.598072 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jv7f9"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.608802 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.656748 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ldzkm"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.675466 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-ldzkm"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.701056 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.736111 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-n6cqg"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.757003 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-n6cqg"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.778556 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5brf"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.792264 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-c5brf"] Mar 20 07:15:47 crc kubenswrapper[5136]: E0320 07:15:47.844246 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:47 crc kubenswrapper[5136]: E0320 07:15:47.844728 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data podName:c355061d-c5fd-4655-aa7e-37b5a40a0400 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:48.344709143 +0000 UTC m=+1580.604020294 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data") pod "rabbitmq-cell1-server-0" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400") : configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.886621 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gnwt6"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.902509 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-ldp4w"] Mar 20 07:15:47 crc kubenswrapper[5136]: E0320 07:15:47.946493 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 07:15:47 crc kubenswrapper[5136]: E0320 07:15:47.983962 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 07:15:47 crc kubenswrapper[5136]: I0320 07:15:47.987876 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-9v9kr"] Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.021398 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:48 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc 
kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: if [ -n "barbican" ]; then Mar 20 07:15:48 crc kubenswrapper[5136]: GRANT_DATABASE="barbican" Mar 20 07:15:48 crc kubenswrapper[5136]: else Mar 20 07:15:48 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:48 crc kubenswrapper[5136]: fi Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:48 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:48 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:48 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:48 crc kubenswrapper[5136]: # support updates Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.029268 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-5429-account-create-update-54j5b" podUID="79272887-6a7f-4336-858a-6844ed6e8a37" Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.059770 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.060636 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="ovn-northd" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.083446 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-9v9kr"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.185005 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-vr74x"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.185706 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-vr74x" podUID="0ede60bf-5bc5-4267-9849-9389df070048" containerName="openstack-network-exporter" 
containerID="cri-o://c83952221ac9ae15d237b01aa417d2a8651bd6786c0034250cebe0e17be31690" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.200327 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-kxk7p"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.210862 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-kxk7p"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.210968 5136 generic.go:334] "Generic (PLEG): container finished" podID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerID="e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf" exitCode=2 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.211060 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerDied","Data":"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf"} Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.216328 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.216596 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="dnsmasq-dns" containerID="cri-o://ccb4f9c0c6dc989c486c61d0a17af6a9e3438c25ae843380545c453141823051" gracePeriod=10 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.218568 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f872c575-a357-4b29-b5e8-cf5dbe6f3d7a/ovsdbserver-sb/0.log" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.218605 5136 generic.go:334] "Generic (PLEG): container finished" podID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerID="55e70c80be714d08791bbb875a2885eb056808546361bafa1ce59b4a2b4afd94" exitCode=2 Mar 20 07:15:48 crc 
kubenswrapper[5136]: I0320 07:15:48.218621 5136 generic.go:334] "Generic (PLEG): container finished" podID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerID="7f81f78f97fc5d48f48b6354b794c050f707e5b35fc6d46c7df2de9e4878960b" exitCode=143 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.218663 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerDied","Data":"55e70c80be714d08791bbb875a2885eb056808546361bafa1ce59b4a2b4afd94"} Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.218698 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerDied","Data":"7f81f78f97fc5d48f48b6354b794c050f707e5b35fc6d46c7df2de9e4878960b"} Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.244468 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5429-account-create-update-54j5b" event={"ID":"79272887-6a7f-4336-858a-6844ed6e8a37","Type":"ContainerStarted","Data":"7d5996405d499205fc914d73a14603978ef7492a89b03a07f615cb81cd56c34d"} Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.246904 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:48 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 
07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: if [ -n "barbican" ]; then Mar 20 07:15:48 crc kubenswrapper[5136]: GRANT_DATABASE="barbican" Mar 20 07:15:48 crc kubenswrapper[5136]: else Mar 20 07:15:48 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:48 crc kubenswrapper[5136]: fi Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:48 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:48 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:48 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:48 crc kubenswrapper[5136]: # support updates Mar 20 07:15:48 crc kubenswrapper[5136]: Mar 20 07:15:48 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.259979 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-v7xvp"] Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.260496 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-5429-account-create-update-54j5b" podUID="79272887-6a7f-4336-858a-6844ed6e8a37" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.274930 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8b1461d1-f963-40b0-8cad-a5b2735eedcc/ovsdbserver-nb/0.log" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.275197 5136 generic.go:334] "Generic (PLEG): container finished" podID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerID="ad85344499cd2c34ea152d61f6efd8d5a2edf8814c85572a74d76108d11d3655" exitCode=2 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 
07:15:48.279366 5136 generic.go:334] "Generic (PLEG): container finished" podID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerID="c02c0e7e0e6b0a33a002d424a3ac60fcdca9be308ea7764e0da41ae85bb3639a" exitCode=143 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.277156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerDied","Data":"ad85344499cd2c34ea152d61f6efd8d5a2edf8814c85572a74d76108d11d3655"} Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.279568 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerDied","Data":"c02c0e7e0e6b0a33a002d424a3ac60fcdca9be308ea7764e0da41ae85bb3639a"} Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.279602 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-v7xvp"] Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.381048 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.381153 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data podName:c355061d-c5fd-4655-aa7e-37b5a40a0400 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:49.381118015 +0000 UTC m=+1581.640429166 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data") pod "rabbitmq-cell1-server-0" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400") : configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.391906 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.392398 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-server" containerID="cri-o://83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393324 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-updater" containerID="cri-o://e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393440 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="rsync" containerID="cri-o://32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393516 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-expirer" containerID="cri-o://34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393587 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" 
podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-updater" containerID="cri-o://81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393656 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-auditor" containerID="cri-o://1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393711 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-replicator" containerID="cri-o://9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393754 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-server" containerID="cri-o://09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.393982 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-reaper" containerID="cri-o://c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.394068 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-auditor" containerID="cri-o://cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.394147 5136 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-replicator" containerID="cri-o://8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.394223 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-server" containerID="cri-o://2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.394291 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-replicator" containerID="cri-o://2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.394343 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-auditor" containerID="cri-o://ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.396909 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="swift-recon-cron" containerID="cri-o://f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.453744 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16f28a76-f7a5-4980-a693-7bd078f3c128" path="/var/lib/kubelet/pods/16f28a76-f7a5-4980-a693-7bd078f3c128/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 
07:15:48.454583 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a1492b7-73df-440c-9246-ae0e3c2e8802" path="/var/lib/kubelet/pods/2a1492b7-73df-440c-9246-ae0e3c2e8802/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.455091 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fc03366-82a1-4e30-a7e8-a06e16a8a14f" path="/var/lib/kubelet/pods/2fc03366-82a1-4e30-a7e8-a06e16a8a14f/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.456805 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5241dc-9fdc-4e75-9924-fb00a2e6119d" path="/var/lib/kubelet/pods/4f5241dc-9fdc-4e75-9924-fb00a2e6119d/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.459670 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52702304-46c3-4028-af56-60e936dea0a9" path="/var/lib/kubelet/pods/52702304-46c3-4028-af56-60e936dea0a9/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.466441 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61300b5b-7c36-4857-a0bf-631bf3cbb001" path="/var/lib/kubelet/pods/61300b5b-7c36-4857-a0bf-631bf3cbb001/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.467104 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fd262d5-bfc7-49ae-908e-709fa9d0f55f" path="/var/lib/kubelet/pods/7fd262d5-bfc7-49ae-908e-709fa9d0f55f/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.467786 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81055905-a498-49a7-917a-2032a292710e" path="/var/lib/kubelet/pods/81055905-a498-49a7-917a-2032a292710e/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.469193 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8c6efdb-3b8c-4123-bfb6-a67cd416fb18" path="/var/lib/kubelet/pods/a8c6efdb-3b8c-4123-bfb6-a67cd416fb18/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 
07:15:48.470175 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abe527e6-fcfe-4955-a1c4-b2b63f1e3c60" path="/var/lib/kubelet/pods/abe527e6-fcfe-4955-a1c4-b2b63f1e3c60/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.470675 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd689ec0-53e3-498c-9bd7-e6c4be0a94ab" path="/var/lib/kubelet/pods/bd689ec0-53e3-498c-9bd7-e6c4be0a94ab/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.471469 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccfe42cb-9794-449c-8ad8-54d68bf21607" path="/var/lib/kubelet/pods/ccfe42cb-9794-449c-8ad8-54d68bf21607/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.474092 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2b269d7-6c83-46fd-b85c-5d9dba5ccbda" path="/var/lib/kubelet/pods/d2b269d7-6c83-46fd-b85c-5d9dba5ccbda/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.474561 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91601d4-11a0-4327-8f7e-6856df2b4643" path="/var/lib/kubelet/pods/f91601d4-11a0-4327-8f7e-6856df2b4643/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.475039 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa8bbe04-14be-44c7-8264-0280abbe2023" path="/var/lib/kubelet/pods/fa8bbe04-14be-44c7-8264-0280abbe2023/volumes" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.475681 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.478358 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.478572 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="cinder-scheduler" containerID="cri-o://70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.478885 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="probe" containerID="cri-o://46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.486763 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qrg9s"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.539626 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qrg9s"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.540957 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" containerID="cri-o://f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.544623 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8b1461d1-f963-40b0-8cad-a5b2735eedcc/ovsdbserver-nb/0.log" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.544706 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.609236 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.609956 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api-log" containerID="cri-o://c15b277e6d0d090e0e5755609decc556ab1c1f03a878f14749a60fdfeeec941e" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.610542 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api" containerID="cri-o://f3795d92724d61612c4e998a075d7ebdd89fee122f5c02cbcebdad3f46cd4b7c" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.652072 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.652550 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-log" containerID="cri-o://ddf75c942dcbf834dfd88d5bc8a1e8a0fa00deb6223f84700bb4c75bb0cce612" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.652938 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-httpd" containerID="cri-o://08811e57d5ae08f29bf6ac8aa7f95e929dc4a7c310d13f900fb2c645979418d6" gracePeriod=30 Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.684686 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"] Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 
07:15:48.684945 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-dc8db4fdb-hpjdg" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-log" containerID="cri-o://14f94b6d1dd07b874e83aed25b1716c42ede7203afe8fc38064921b976f5c65d" gracePeriod=30
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.687973 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-dc8db4fdb-hpjdg" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-api" containerID="cri-o://605e2f1b6fdab04852864ae8ba9a1933cc6fbe478b172080fd10f5d23b52f0fe" gracePeriod=30
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699423 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") "
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjbm7\" (UniqueName: \"kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") "
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699524 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") "
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699586 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") "
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699689 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") "
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699705 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") "
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699761 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") "
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.699798 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs\") pod \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\" (UID: \"8b1461d1-f963-40b0-8cad-a5b2735eedcc\") "
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.707377 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.709394 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts" (OuterVolumeSpecName: "scripts") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.709753 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config" (OuterVolumeSpecName: "config") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.718179 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.762096 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7" (OuterVolumeSpecName: "kube-api-access-pjbm7") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "kube-api-access-pjbm7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.792051 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-c6tbf"]
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.810654 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.810789 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.810895 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.811247 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjbm7\" (UniqueName: \"kubernetes.io/projected/8b1461d1-f963-40b0-8cad-a5b2735eedcc-kube-api-access-pjbm7\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.811326 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b1461d1-f963-40b0-8cad-a5b2735eedcc-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.846867 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-c6tbf"]
Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.851290 5136 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=<
Mar 20 07:15:48 crc kubenswrapper[5136]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Mar 20 07:15:48 crc kubenswrapper[5136]: + source /usr/local/bin/container-scripts/functions
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNBridge=br-int
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNRemote=tcp:localhost:6642
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNEncapType=geneve
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNAvailabilityZones=
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ EnableChassisAsGateway=true
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ PhysicalNetworks=
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNHostName=
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ DB_FILE=/etc/openvswitch/conf.db
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ ovs_dir=/var/lib/openvswitch
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 20 07:15:48 crc kubenswrapper[5136]: + sleep 0.5
Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 20 07:15:48 crc kubenswrapper[5136]: + cleanup_ovsdb_server_semaphore
Mar 20 07:15:48 crc kubenswrapper[5136]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 20 07:15:48 crc kubenswrapper[5136]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Mar 20 07:15:48 crc kubenswrapper[5136]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-ldp4w" message=<
Mar 20 07:15:48 crc kubenswrapper[5136]: Exiting ovsdb-server (5) [ OK ]
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Mar 20 07:15:48 crc kubenswrapper[5136]: + source /usr/local/bin/container-scripts/functions
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNBridge=br-int
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNRemote=tcp:localhost:6642
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNEncapType=geneve
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNAvailabilityZones=
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ EnableChassisAsGateway=true
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ PhysicalNetworks=
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNHostName=
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ DB_FILE=/etc/openvswitch/conf.db
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ ovs_dir=/var/lib/openvswitch
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 20 07:15:48 crc kubenswrapper[5136]: + sleep 0.5
Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 20 07:15:48 crc kubenswrapper[5136]: + cleanup_ovsdb_server_semaphore
Mar 20 07:15:48 crc kubenswrapper[5136]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 20 07:15:48 crc kubenswrapper[5136]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Mar 20 07:15:48 crc kubenswrapper[5136]: >
Mar 20 07:15:48 crc kubenswrapper[5136]: E0320 07:15:48.851371 5136 kuberuntime_container.go:691] "PreStop hook failed" err=<
Mar 20 07:15:48 crc kubenswrapper[5136]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Mar 20 07:15:48 crc kubenswrapper[5136]: + source /usr/local/bin/container-scripts/functions
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNBridge=br-int
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNRemote=tcp:localhost:6642
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNEncapType=geneve
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNAvailabilityZones=
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ EnableChassisAsGateway=true
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ PhysicalNetworks=
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ OVNHostName=
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ DB_FILE=/etc/openvswitch/conf.db
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ ovs_dir=/var/lib/openvswitch
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Mar 20 07:15:48 crc kubenswrapper[5136]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 20 07:15:48 crc kubenswrapper[5136]: + sleep 0.5
Mar 20 07:15:48 crc kubenswrapper[5136]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Mar 20 07:15:48 crc kubenswrapper[5136]: + cleanup_ovsdb_server_semaphore
Mar 20 07:15:48 crc kubenswrapper[5136]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Mar 20 07:15:48 crc kubenswrapper[5136]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Mar 20 07:15:48 crc kubenswrapper[5136]: > pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" containerID="cri-o://5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58"
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.851432 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" containerID="cri-o://5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" gracePeriod=30
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.867107 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.867403 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-log" containerID="cri-o://662932f3f7c10a1e5293dce36be1a7b6f6fe4dc40e2e2d324b7c68188c034162" gracePeriod=30
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.867953 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-httpd" containerID="cri-o://3cd1e6e7f78367e01aa376387fc42404408757a25929ca8c639774857d99ccfd" gracePeriod=30
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.875186 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"]
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.875866 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6ff4f58fb9-7gtff" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-api" containerID="cri-o://8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153" gracePeriod=30
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.876252 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6ff4f58fb9-7gtff" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-httpd" containerID="cri-o://f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685" gracePeriod=30
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.887884 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.888492 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-7vvbn"]
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.898978 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-7vvbn"]
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.908640 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.908903 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-log" containerID="cri-o://deb975c81a5be70590a5a6b6abaf2e6a45b6848b791c313d96f348b2eb335fa8" gracePeriod=30
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.909677 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-metadata" containerID="cri-o://f58e5688042713c0b783865903009ad2d11f4198be5353e16c7c078fdfea1674" gracePeriod=30
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.913368 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.918578 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.923292 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.941037 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-mzns4"]
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.962887 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"]
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.971199 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.971590 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-log" containerID="cri-o://a74fc94f08f1ff50393fca876adcf8dbd23397ae767f8be9bb23ab400c14c48a" gracePeriod=30
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.972007 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-api" containerID="cri-o://d7a0a1abaf7649e23b98061506f3cd0ac2d6d6bb3c69694e84fdfcd0ac7ca124" gracePeriod=30
Mar 20 07:15:48 crc kubenswrapper[5136]: I0320 07:15:48.986374 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-xpg98"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.018309 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-xpg98"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.058345 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.066962 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.067243 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-744d6f84fc-bqcsc" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-httpd" containerID="cri-o://0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac" gracePeriod=30
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.067699 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-744d6f84fc-bqcsc" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-server" containerID="cri-o://793f74839b7ab06cf4c132cbd574d3bb47712ec9caa473ebbd084643fcce6f31" gracePeriod=30
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.099419 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-bk75j"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.113012 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-bk75j"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.119390 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.125449 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "8b1461d1-f963-40b0-8cad-a5b2735eedcc" (UID: "8b1461d1-f963-40b0-8cad-a5b2735eedcc"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.134654 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.139311 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-4vtvh"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.150456 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.159786 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.159831 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8b1461d1-f963-40b0-8cad-a5b2735eedcc-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.159854 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-4jdnj"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.170247 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-4vtvh"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.262486 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.262732 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener-log" containerID="cri-o://afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997" gracePeriod=30
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.263141 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener" containerID="cri-o://cd65718bfac09f4d934fe1bf3f629f5d852e12343f9a1b480d6984e7497c79aa" gracePeriod=30
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.274208 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-4jdnj"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.281917 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.282104 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-78df67c79-bqz8t" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker-log" containerID="cri-o://afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec" gracePeriod=30
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.282447 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-78df67c79-bqz8t" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker" containerID="cri-o://dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0" gracePeriod=30
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.293491 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294431 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294454 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294461 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294469 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294476 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294482 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294488 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294494 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294500 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294506 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294513 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294518 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294524 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294530 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294557 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294575 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294584 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294593 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294602 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294611 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294620 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294629 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294637 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294645 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294654 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294663 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294671 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.294679 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.296476 5136 generic.go:334] "Generic (PLEG): container finished" podID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerID="662932f3f7c10a1e5293dce36be1a7b6f6fe4dc40e2e2d324b7c68188c034162" exitCode=143
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.296513 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerDied","Data":"662932f3f7c10a1e5293dce36be1a7b6f6fe4dc40e2e2d324b7c68188c034162"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.298326 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_8b1461d1-f963-40b0-8cad-a5b2735eedcc/ovsdbserver-nb/0.log"
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.298376 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"8b1461d1-f963-40b0-8cad-a5b2735eedcc","Type":"ContainerDied","Data":"11e0a5791b54dfc64b5c868dfb4c7110fa55e59d3ea215d5dd89246b1feeb323"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.298397 5136 scope.go:117] "RemoveContainer" containerID="ad85344499cd2c34ea152d61f6efd8d5a2edf8814c85572a74d76108d11d3655"
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.298500 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.307940 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.307967 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-2sj8m"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.317531 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd71646c-cb64-4a01-8076-449c812955d5" containerID="14f94b6d1dd07b874e83aed25b1716c42ede7203afe8fc38064921b976f5c65d" exitCode=143
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.317650 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerDied","Data":"14f94b6d1dd07b874e83aed25b1716c42ede7203afe8fc38064921b976f5c65d"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.318757 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-2sj8m"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.318950 5136 generic.go:334] "Generic (PLEG): container finished" podID="17ad787b-18bc-4afd-840b-2458b494094a" containerID="ef5361e0b73e9c41cc23b5ebe9348fce6a363e59e0bc84a305ad44756dd780af" exitCode=137
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.322657 5136 generic.go:334] "Generic (PLEG): container finished" podID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerID="ddf75c942dcbf834dfd88d5bc8a1e8a0fa00deb6223f84700bb4c75bb0cce612" exitCode=143
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.322711 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerDied","Data":"ddf75c942dcbf834dfd88d5bc8a1e8a0fa00deb6223f84700bb4c75bb0cce612"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.327454 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f872c575-a357-4b29-b5e8-cf5dbe6f3d7a/ovsdbserver-sb/0.log"
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.327508 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a","Type":"ContainerDied","Data":"58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.327529 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58fe8e25256a499fb2de621906997ec654e0364d2ff5f6192f81208051ec80d6"
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.329438 5136 generic.go:334] "Generic (PLEG): container finished" podID="76d08c01-d488-4f36-9998-7f074633c7c5" containerID="c15b277e6d0d090e0e5755609decc556ab1c1f03a878f14749a60fdfeeec941e" exitCode=143
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.329477 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerDied","Data":"c15b277e6d0d090e0e5755609decc556ab1c1f03a878f14749a60fdfeeec941e"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.330412 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzns4" event={"ID":"5d2085e7-db7e-4655-965c-027d03e474e0","Type":"ContainerStarted","Data":"2b8d445e4425096daf41465721adf2ee58e490471ea6782e4e955f4d28582fd2"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.331298 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"]
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.331480 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64845646dd-wf28v" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api-log" containerID="cri-o://4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536" gracePeriod=30
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.331833 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-64845646dd-wf28v" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api" containerID="cri-o://b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd" gracePeriod=30
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.333663 5136 generic.go:334] "Generic (PLEG): container finished" podID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerID="a74fc94f08f1ff50393fca876adcf8dbd23397ae767f8be9bb23ab400c14c48a" exitCode=143
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.333739 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerDied","Data":"a74fc94f08f1ff50393fca876adcf8dbd23397ae767f8be9bb23ab400c14c48a"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.339592 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerID="ccb4f9c0c6dc989c486c61d0a17af6a9e3438c25ae843380545c453141823051" exitCode=0
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.339652 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" event={"ID":"ccc70cce-242d-4c99-8d3f-ddb541904e29","Type":"ContainerDied","Data":"ccb4f9c0c6dc989c486c61d0a17af6a9e3438c25ae843380545c453141823051"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.339677 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" event={"ID":"ccc70cce-242d-4c99-8d3f-ddb541904e29","Type":"ContainerDied","Data":"b418e83480ddaf25a5b00d4752775ac00973d088cabf1d27a9b6eceb6bb0b062"}
Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.339688 5136 pod_container_deletor.go:80]
"Container not found in pod's containers" containerID="b418e83480ddaf25a5b00d4752775ac00973d088cabf1d27a9b6eceb6bb0b062" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.341404 5136 generic.go:334] "Generic (PLEG): container finished" podID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.341437 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerDied","Data":"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.342282 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"] Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.342679 5136 generic.go:334] "Generic (PLEG): container finished" podID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerID="f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685" exitCode=0 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.342716 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerDied","Data":"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.343761 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vr74x_0ede60bf-5bc5-4267-9849-9389df070048/openstack-network-exporter/0.log" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.343787 5136 generic.go:334] "Generic (PLEG): container finished" podID="0ede60bf-5bc5-4267-9849-9389df070048" containerID="c83952221ac9ae15d237b01aa417d2a8651bd6786c0034250cebe0e17be31690" exitCode=2 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.343875 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vr74x" event={"ID":"0ede60bf-5bc5-4267-9849-9389df070048","Type":"ContainerDied","Data":"c83952221ac9ae15d237b01aa417d2a8651bd6786c0034250cebe0e17be31690"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.343890 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vr74x" event={"ID":"0ede60bf-5bc5-4267-9849-9389df070048","Type":"ContainerDied","Data":"96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2"} Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.343899 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f4d9700a6f20f9648ff8d4f3bad201abaff41477fe15fa2f506bfcba3bded2" Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.345063 5136 generic.go:334] "Generic (PLEG): container finished" podID="af66742a-1452-436f-a22e-7dc277cf690a" containerID="deb975c81a5be70590a5a6b6abaf2e6a45b6848b791c313d96f348b2eb335fa8" exitCode=143 Mar 20 07:15:49 crc kubenswrapper[5136]: I0320 07:15:49.345883 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerDied","Data":"deb975c81a5be70590a5a6b6abaf2e6a45b6848b791c313d96f348b2eb335fa8"} Mar 20 07:15:49 crc kubenswrapper[5136]: E0320 07:15:49.346694 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:49 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please 
specify a DatabasePassword variable."} Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: if [ -n "barbican" ]; then Mar 20 07:15:49 crc kubenswrapper[5136]: GRANT_DATABASE="barbican" Mar 20 07:15:49 crc kubenswrapper[5136]: else Mar 20 07:15:49 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:49 crc kubenswrapper[5136]: fi Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:49 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:49 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:49 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:49 crc kubenswrapper[5136]: # support updates Mar 20 07:15:49 crc kubenswrapper[5136]: Mar 20 07:15:49 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:49 crc kubenswrapper[5136]: E0320 07:15:49.348375 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-5429-account-create-update-54j5b" podUID="79272887-6a7f-4336-858a-6844ed6e8a37" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.357232 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rzgpn"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.372529 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-rzgpn"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.384666 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 
07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.384721 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.384932 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor" containerID="cri-o://f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" gracePeriod=30 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.385138 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="38885968-65f8-45e9-8e72-7464d5e78b85" containerName="nova-cell0-conductor-conductor" containerID="cri-o://9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff" gracePeriod=30 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.392648 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhwjx"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.400120 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-lhwjx"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.412640 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.412868 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a4685f240b41a0ca80c93510a938eb41d236fed91b12082fd48b7f0a68f41629" gracePeriod=30 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.415977 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.419893 5136 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:49.420376 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler" containerID="cri-o://103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" gracePeriod=30 Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.199626 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.200030 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data podName:c355061d-c5fd-4655-aa7e-37b5a40a0400 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:52.200009531 +0000 UTC m=+1584.459320682 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data") pod "rabbitmq-cell1-server-0" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400") : configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.218167 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.218903 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.224757 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.202:6080/vnc_lite.html\": dial tcp 10.217.0.202:6080: connect: connection refused" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.226910 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" 
containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.226950 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.232049 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.252183 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="galera" containerID="cri-o://2f108d33fa61f242b6db0d1497e869fa649b09a841ba1ec8a5200036f1da6f44" gracePeriod=29 Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.256967 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.257175 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="rabbitmq" 
containerID="cri-o://ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357" gracePeriod=604800 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.269931 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="rabbitmq" containerID="cri-o://b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155" gracePeriod=604800 Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.291279 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.291340 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.292549 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.293890 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.295047 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.295079 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.321129 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.104:5671: connect: connection refused" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.362969 5136 generic.go:334] "Generic (PLEG): container finished" podID="31adef78-59fe-4327-9586-0c12177c7bb7" containerID="46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb" exitCode=0 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.363045 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerDied","Data":"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.367346 5136 generic.go:334] "Generic (PLEG): container finished" podID="63ab8493-eb78-41d9-b368-bba74dc78166" containerID="a4685f240b41a0ca80c93510a938eb41d236fed91b12082fd48b7f0a68f41629" exitCode=0 Mar 20 07:15:50 crc 
kubenswrapper[5136]: I0320 07:15:50.367417 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63ab8493-eb78-41d9-b368-bba74dc78166","Type":"ContainerDied","Data":"a4685f240b41a0ca80c93510a938eb41d236fed91b12082fd48b7f0a68f41629"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.374801 5136 generic.go:334] "Generic (PLEG): container finished" podID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerID="4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536" exitCode=143 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.374873 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerDied","Data":"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.379750 5136 generic.go:334] "Generic (PLEG): container finished" podID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerID="afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec" exitCode=143 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.379840 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78df67c79-bqz8t" event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerDied","Data":"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.382546 5136 generic.go:334] "Generic (PLEG): container finished" podID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerID="793f74839b7ab06cf4c132cbd574d3bb47712ec9caa473ebbd084643fcce6f31" exitCode=0 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.382570 5136 generic.go:334] "Generic (PLEG): container finished" podID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerID="0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac" exitCode=0 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 
07:15:50.382608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerDied","Data":"793f74839b7ab06cf4c132cbd574d3bb47712ec9caa473ebbd084643fcce6f31"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.382630 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerDied","Data":"0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.390150 5136 generic.go:334] "Generic (PLEG): container finished" podID="2a59ab3d-3094-4e10-bbde-44479696f752" containerID="afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997" exitCode=143 Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.390202 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerDied","Data":"afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997"} Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.427340 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e901a54-c442-45fd-a0d8-1568f850efb4" path="/var/lib/kubelet/pods/2e901a54-c442-45fd-a0d8-1568f850efb4/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.427864 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e4cd633-e391-4daa-8d31-f9e05afb5fe9" path="/var/lib/kubelet/pods/3e4cd633-e391-4daa-8d31-f9e05afb5fe9/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.428353 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a15871b-0fd2-4db9-a42a-8e822efa35fb" path="/var/lib/kubelet/pods/4a15871b-0fd2-4db9-a42a-8e822efa35fb/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.428869 5136 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f4b546d-a206-4e15-b21b-850ef44aac79" path="/var/lib/kubelet/pods/4f4b546d-a206-4e15-b21b-850ef44aac79/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.429878 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52bcca3a-bd10-425e-bc7f-f78c8c4a0271" path="/var/lib/kubelet/pods/52bcca3a-bd10-425e-bc7f-f78c8c4a0271/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.430408 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="744eb619-4231-474c-a8b2-a37ed7432086" path="/var/lib/kubelet/pods/744eb619-4231-474c-a8b2-a37ed7432086/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.430914 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ba128a-ff3d-42a9-aa76-04e60b3a2cb5" path="/var/lib/kubelet/pods/81ba128a-ff3d-42a9-aa76-04e60b3a2cb5/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.434157 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfbcdb71-4e43-4243-a408-08d69b6d7328" path="/var/lib/kubelet/pods/bfbcdb71-4e43-4243-a408-08d69b6d7328/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.435283 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edb3559d-359a-4add-8216-afb68a19e111" path="/var/lib/kubelet/pods/edb3559d-359a-4add-8216-afb68a19e111/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.438141 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfd9851-96cd-483e-9e66-b1cc255cb3e2" path="/var/lib/kubelet/pods/fdfd9851-96cd-483e-9e66-b1cc255cb3e2/volumes" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.541872 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_f872c575-a357-4b29-b5e8-cf5dbe6f3d7a/ovsdbserver-sb/0.log" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.541985 5136 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.573176 5136 scope.go:117] "RemoveContainer" containerID="c02c0e7e0e6b0a33a002d424a3ac60fcdca9be308ea7764e0da41ae85bb3639a" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.578862 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.585054 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vr74x_0ede60bf-5bc5-4267-9849-9389df070048/openstack-network-exporter/0.log" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.585096 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vr74x" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.591731 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"] Mar 20 07:15:50 crc kubenswrapper[5136]: W0320 07:15:50.611622 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12a2cdc2_1b05_4bfd_99e9_ce92d81d3af3.slice/crio-c7edf0f3f6556e6164f7e7ded6cdbe431275f40b863ad60fee11457248b95cfe WatchSource:0}: Error finding container c7edf0f3f6556e6164f7e7ded6cdbe431275f40b863ad60fee11457248b95cfe: Status 404 returned error can't find the container with id c7edf0f3f6556e6164f7e7ded6cdbe431275f40b863ad60fee11457248b95cfe Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.622765 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:50 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:50 crc 
kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: if [ -n "nova_api" ]; then Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="nova_api" Mar 20 07:15:50 crc kubenswrapper[5136]: else Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:50 crc kubenswrapper[5136]: fi Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:50 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:50 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:50 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:50 crc kubenswrapper[5136]: # support updates Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.624348 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" podUID="12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.698628 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707114 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnhlv\" (UniqueName: \"kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707158 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707186 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb\") pod \"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707225 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707252 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707275 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-5bfpk\" (UniqueName: \"kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk\") pod \"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707310 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0\") pod \"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707328 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707359 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707376 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config\") pod \"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707404 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 
07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707434 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707461 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707490 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707514 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hd6zc\" (UniqueName: \"kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707529 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc\") pod \"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707551 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb\") pod 
\"ccc70cce-242d-4c99-8d3f-ddb541904e29\" (UID: \"ccc70cce-242d-4c99-8d3f-ddb541904e29\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707607 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle\") pod \"0ede60bf-5bc5-4267-9849-9389df070048\" (UID: \"0ede60bf-5bc5-4267-9849-9389df070048\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.707639 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs\") pod \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\" (UID: \"f872c575-a357-4b29-b5e8-cf5dbe6f3d7a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.717639 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.718442 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.718935 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.719596 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config" (OuterVolumeSpecName: "config") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.719683 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv" (OuterVolumeSpecName: "kube-api-access-gnhlv") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "kube-api-access-gnhlv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.719763 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts" (OuterVolumeSpecName: "scripts") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.723366 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config" (OuterVolumeSpecName: "config") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.747023 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc" (OuterVolumeSpecName: "kube-api-access-hd6zc") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "kube-api-access-hd6zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.747905 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6fd2bfe2_2220_4617_ac9a_d02f6222cfd0.slice/crio-afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a59ab3d_3094_4e10_bbde_44479696f752.slice/crio-afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4656b3f4_a2bd_4dd9_913c_a4c3a6d6076b.slice/crio-0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5d2085e7_db7e_4655_965c_027d03e474e0.slice/crio-221581ba79e5516de8738bb17ad49d51aed46039fd504cffbf7aaf70d3a7d74b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b1461d1_f963_40b0_8cad_a5b2735eedcc.slice\": RecentStats: unable to find data in memory cache]" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.751129 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk" (OuterVolumeSpecName: "kube-api-access-5bfpk") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). InnerVolumeSpecName "kube-api-access-5bfpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.751467 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.775559 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.809665 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret\") pod \"17ad787b-18bc-4afd-840b-2458b494094a\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.809795 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdzch\" (UniqueName: \"kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch\") pod \"17ad787b-18bc-4afd-840b-2458b494094a\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.809899 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config\") pod \"17ad787b-18bc-4afd-840b-2458b494094a\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810103 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle\") pod \"17ad787b-18bc-4afd-840b-2458b494094a\" (UID: \"17ad787b-18bc-4afd-840b-2458b494094a\") " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810665 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0ede60bf-5bc5-4267-9849-9389df070048-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810683 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovn-rundir\") on node 
\"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810692 5136 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/0ede60bf-5bc5-4267-9849-9389df070048-ovs-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810729 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810741 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hd6zc\" (UniqueName: \"kubernetes.io/projected/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-kube-api-access-hd6zc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810751 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810759 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnhlv\" (UniqueName: \"kubernetes.io/projected/0ede60bf-5bc5-4267-9849-9389df070048-kube-api-access-gnhlv\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810767 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810775 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810800 5136 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-5bfpk\" (UniqueName: \"kubernetes.io/projected/ccc70cce-242d-4c99-8d3f-ddb541904e29-kube-api-access-5bfpk\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.810833 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.825467 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch" (OuterVolumeSpecName: "kube-api-access-vdzch") pod "17ad787b-18bc-4afd-840b-2458b494094a" (UID: "17ad787b-18bc-4afd-840b-2458b494094a"). InnerVolumeSpecName "kube-api-access-vdzch". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.853117 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.859216 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-54j5b" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.870309 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.882686 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config" (OuterVolumeSpecName: "config") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.888122 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.915356 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.915399 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.915413 5136 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.915425 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.915438 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdzch\" (UniqueName: \"kubernetes.io/projected/17ad787b-18bc-4afd-840b-2458b494094a-kube-api-access-vdzch\") on node \"crc\" 
DevicePath \"\"" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.919026 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.929434 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17ad787b-18bc-4afd-840b-2458b494094a" (UID: "17ad787b-18bc-4afd-840b-2458b494094a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.934009 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.955032 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.984350 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "17ad787b-18bc-4afd-840b-2458b494094a" (UID: "17ad787b-18bc-4afd-840b-2458b494094a"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.987586 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:50 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: if [ -n "nova_cell1" ]; then Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="nova_cell1" Mar 20 07:15:50 crc kubenswrapper[5136]: else Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:50 crc kubenswrapper[5136]: fi Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:50 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:50 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:50 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:50 crc kubenswrapper[5136]: # support updates Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.988374 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:50 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: if [ -n "nova_cell0" ]; then Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="nova_cell0" Mar 20 07:15:50 crc kubenswrapper[5136]: else Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:50 crc kubenswrapper[5136]: fi Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:50 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:50 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:50 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:50 crc kubenswrapper[5136]: # support updates Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.988711 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" podUID="17669c27-ef49-4ced-a620-ef7394f02110" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.990120 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" podUID="6638ac71-bcca-4dbb-9ec3-d9ef0da336db" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.990736 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"] Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.992540 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.994288 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 07:15:50 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb@sha256:4caef2b55e01b9a7ee88a22bc69db1893521a91d95c7ad4c8e593f14f17a5f95,Command:[/bin/sh -c #!/bin/bash Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: if [ -n "placement" ]; then Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="placement" Mar 20 07:15:50 crc kubenswrapper[5136]: else Mar 20 07:15:50 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 07:15:50 crc kubenswrapper[5136]: fi Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 07:15:50 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 07:15:50 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 07:15:50 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 07:15:50 crc kubenswrapper[5136]: # support updates Mar 20 07:15:50 crc kubenswrapper[5136]: Mar 20 07:15:50 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 07:15:50 crc kubenswrapper[5136]: I0320 07:15:50.995413 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" (UID: "f872c575-a357-4b29-b5e8-cf5dbe6f3d7a"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:50 crc kubenswrapper[5136]: E0320 07:15:50.995466 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"placement-db-secret\\\" not found\"" pod="openstack/placement-a0f6-account-create-update-d5xps" podUID="1490877f-a8fa-4bcd-8c33-be84b9b890aa" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.000233 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.001450 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.008144 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-744d6f84fc-bqcsc" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.010161 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ccc70cce-242d-4c99-8d3f-ddb541904e29" (UID: "ccc70cce-242d-4c99-8d3f-ddb541904e29"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.011687 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "0ede60bf-5bc5-4267-9849-9389df070048" (UID: "0ede60bf-5bc5-4267-9849-9389df070048"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.020630 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts\") pod \"79272887-6a7f-4336-858a-6844ed6e8a37\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.020933 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgnfv\" (UniqueName: \"kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv\") pod \"79272887-6a7f-4336-858a-6844ed6e8a37\" (UID: \"79272887-6a7f-4336-858a-6844ed6e8a37\") " Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021446 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "79272887-6a7f-4336-858a-6844ed6e8a37" (UID: "79272887-6a7f-4336-858a-6844ed6e8a37"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021595 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021619 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021634 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021644 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/79272887-6a7f-4336-858a-6844ed6e8a37-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021654 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021663 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ede60bf-5bc5-4267-9849-9389df070048-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021674 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021684 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.021695 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ccc70cce-242d-4c99-8d3f-ddb541904e29-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.035121 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv" (OuterVolumeSpecName: "kube-api-access-hgnfv") pod "79272887-6a7f-4336-858a-6844ed6e8a37" (UID: "79272887-6a7f-4336-858a-6844ed6e8a37"). InnerVolumeSpecName "kube-api-access-hgnfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.036039 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "17ad787b-18bc-4afd-840b-2458b494094a" (UID: "17ad787b-18bc-4afd-840b-2458b494094a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.040370 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.047028 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.048851 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.048999 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123011 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123102 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle\") pod \"63ab8493-eb78-41d9-b368-bba74dc78166\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123184 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld2qt\" (UniqueName: \"kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt\") pod \"63ab8493-eb78-41d9-b368-bba74dc78166\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123220 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123245 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs\") pod \"63ab8493-eb78-41d9-b368-bba74dc78166\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123297 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data\") pod \"63ab8493-eb78-41d9-b368-bba74dc78166\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123331 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpskn\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123376 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123408 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123475 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123548 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123574 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd\") pod \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\" (UID: \"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.123606 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs\") pod \"63ab8493-eb78-41d9-b368-bba74dc78166\" (UID: \"63ab8493-eb78-41d9-b368-bba74dc78166\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.124172 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/17ad787b-18bc-4afd-840b-2458b494094a-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.124192 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgnfv\" (UniqueName: \"kubernetes.io/projected/79272887-6a7f-4336-858a-6844ed6e8a37-kube-api-access-hgnfv\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.126411 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.126552 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.134014 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt" (OuterVolumeSpecName: "kube-api-access-ld2qt") pod "63ab8493-eb78-41d9-b368-bba74dc78166" (UID: "63ab8493-eb78-41d9-b368-bba74dc78166"). InnerVolumeSpecName "kube-api-access-ld2qt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.135959 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.141955 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn" (OuterVolumeSpecName: "kube-api-access-gpskn") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "kube-api-access-gpskn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.166199 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "63ab8493-eb78-41d9-b368-bba74dc78166" (UID: "63ab8493-eb78-41d9-b368-bba74dc78166"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.214341 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data" (OuterVolumeSpecName: "config-data") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.216181 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "63ab8493-eb78-41d9-b368-bba74dc78166" (UID: "63ab8493-eb78-41d9-b368-bba74dc78166"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228032 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpskn\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-kube-api-access-gpskn\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228070 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228083 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228095 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228108 5136 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228121 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228134 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld2qt\" (UniqueName: \"kubernetes.io/projected/63ab8493-eb78-41d9-b368-bba74dc78166-kube-api-access-ld2qt\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.228145 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-etc-swift\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.260864 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "63ab8493-eb78-41d9-b368-bba74dc78166" (UID: "63ab8493-eb78-41d9-b368-bba74dc78166"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.274113 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data" (OuterVolumeSpecName: "config-data") pod "63ab8493-eb78-41d9-b368-bba74dc78166" (UID: "63ab8493-eb78-41d9-b368-bba74dc78166"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.317127 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.323948 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.329914 5136 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.329946 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63ab8493-eb78-41d9-b368-bba74dc78166-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.329959 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.329969 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.335990 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" (UID: "4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.400755 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" event={"ID":"6638ac71-bcca-4dbb-9ec3-d9ef0da336db","Type":"ContainerStarted","Data":"85c5ef70686107412e859254f31a559f15711b7d6e9fc5a62fab2055603accd9"}
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.405356 5136 generic.go:334] "Generic (PLEG): container finished" podID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerID="2f108d33fa61f242b6db0d1497e869fa649b09a841ba1ec8a5200036f1da6f44" exitCode=0
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.405419 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerDied","Data":"2f108d33fa61f242b6db0d1497e869fa649b09a841ba1ec8a5200036f1da6f44"}
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.407081 5136 scope.go:117] "RemoveContainer" containerID="ef5361e0b73e9c41cc23b5ebe9348fce6a363e59e0bc84a305ad44756dd780af"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.407745 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.416962 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0f6-account-create-update-d5xps" event={"ID":"1490877f-a8fa-4bcd-8c33-be84b9b890aa","Type":"ContainerStarted","Data":"fedc877299952c2908f9ddcf965bfe8418828d992b0c90e9fc6df145a89c5cf7"}
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.420513 5136 generic.go:334] "Generic (PLEG): container finished" podID="5d2085e7-db7e-4655-965c-027d03e474e0" containerID="221581ba79e5516de8738bb17ad49d51aed46039fd504cffbf7aaf70d3a7d74b" exitCode=1
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.420580 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzns4" event={"ID":"5d2085e7-db7e-4655-965c-027d03e474e0","Type":"ContainerDied","Data":"221581ba79e5516de8738bb17ad49d51aed46039fd504cffbf7aaf70d3a7d74b"}
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.421003 5136 scope.go:117] "RemoveContainer" containerID="221581ba79e5516de8738bb17ad49d51aed46039fd504cffbf7aaf70d3a7d74b"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.426014 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" event={"ID":"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3","Type":"ContainerStarted","Data":"c7edf0f3f6556e6164f7e7ded6cdbe431275f40b863ad60fee11457248b95cfe"}
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.432102 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.436400 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-744d6f84fc-bqcsc"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.436428 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-744d6f84fc-bqcsc" event={"ID":"4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b","Type":"ContainerDied","Data":"1d45fa03e9e760b3fecb6f7927ee88ef303052eb3da3a45f5cb31589469d2afb"}
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.443228 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" event={"ID":"17669c27-ef49-4ced-a620-ef7394f02110","Type":"ContainerStarted","Data":"f713db634b57c569428b9818f10efddf9949e7de62c4b479a6b5e91e44342d03"}
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.468641 5136 scope.go:117] "RemoveContainer" containerID="793f74839b7ab06cf4c132cbd574d3bb47712ec9caa473ebbd084643fcce6f31"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.504538 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"63ab8493-eb78-41d9-b368-bba74dc78166","Type":"ContainerDied","Data":"2da23f701d5f2888a554a42a1e274ecc8ec591c846eec70993fd4a7736e9b816"}
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.504629 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.517296 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.522947 5136 generic.go:334] "Generic (PLEG): container finished" podID="38885968-65f8-45e9-8e72-7464d5e78b85" containerID="9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff" exitCode=0
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.523121 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38885968-65f8-45e9-8e72-7464d5e78b85","Type":"ContainerDied","Data":"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff"}
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.523307 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"38885968-65f8-45e9-8e72-7464d5e78b85","Type":"ContainerDied","Data":"1952d35f8dd4e8ea612b2d2c603f4623b8d407450e957b72f0fa46a725392225"}
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.526982 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vr74x"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.532690 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-5429-account-create-update-54j5b" event={"ID":"79272887-6a7f-4336-858a-6844ed6e8a37","Type":"ContainerDied","Data":"7d5996405d499205fc914d73a14603978ef7492a89b03a07f615cb81cd56c34d"}
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.532776 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-5429-account-create-update-54j5b"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.536143 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bd85b459c-k44rj"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.543620 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.573830 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"]
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.625858 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-744d6f84fc-bqcsc"]
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.639402 5136 scope.go:117] "RemoveContainer" containerID="0693cd2e5dff17721d25b39746a75f4a67d98d5e960d0f7f816b7b0f7c0a7fac"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.647075 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle\") pod \"38885968-65f8-45e9-8e72-7464d5e78b85\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.647287 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data\") pod \"38885968-65f8-45e9-8e72-7464d5e78b85\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.647369 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j75vn\" (UniqueName: \"kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn\") pod \"38885968-65f8-45e9-8e72-7464d5e78b85\" (UID: \"38885968-65f8-45e9-8e72-7464d5e78b85\") "
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.715612 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn" (OuterVolumeSpecName: "kube-api-access-j75vn") pod "38885968-65f8-45e9-8e72-7464d5e78b85" (UID: "38885968-65f8-45e9-8e72-7464d5e78b85"). InnerVolumeSpecName "kube-api-access-j75vn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.726173 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.726513 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-central-agent" containerID="cri-o://0ecf229966ba8c79d4898c6f188447ddf715aa5d36d580596d93cecd4aca45f3" gracePeriod=30
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.727633 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="proxy-httpd" containerID="cri-o://6f8eb1aeebd08bbda86b110a89e3d6395071812ae31b73c86592f671595b894d" gracePeriod=30
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.727708 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="sg-core" containerID="cri-o://37086a66c3062e12cadb5382a0b51ad5a523fc39db2e404a6feda0518d0eb230" gracePeriod=30
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.727739 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-notification-agent" containerID="cri-o://cf4672bac844a81b21416c7a8623ac1f87041db75209a7a401cb201726b76413" gracePeriod=30
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.750856 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j75vn\" (UniqueName: \"kubernetes.io/projected/38885968-65f8-45e9-8e72-7464d5e78b85-kube-api-access-j75vn\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.754780 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.755010 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="c17493c5-d958-46ab-8e02-d190b2fa6944" containerName="kube-state-metrics" containerID="cri-o://4af2afde3b60e503cf744acf4fb08477b7ec46cb1b30cfb589608690a2df8849" gracePeriod=30
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.770010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38885968-65f8-45e9-8e72-7464d5e78b85" (UID: "38885968-65f8-45e9-8e72-7464d5e78b85"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.855479 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.855764 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerName="memcached" containerID="cri-o://184e304c1ac08ec0deea0a800adeaeabbaf3a333a8f4d43b893a721e45afd9b7" gracePeriod=30
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.873017 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e762-account-create-update-5vpcp"]
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.874796 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.875962 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e762-account-create-update-5vpcp"]
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.877379 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.886366 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-vs5ks"]
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.901607 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-vs5ks"]
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.915994 5136 scope.go:117] "RemoveContainer" containerID="a4685f240b41a0ca80c93510a938eb41d236fed91b12082fd48b7f0a68f41629"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.923292 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data" (OuterVolumeSpecName: "config-data") pod "38885968-65f8-45e9-8e72-7464d5e78b85" (UID: "38885968-65f8-45e9-8e72-7464d5e78b85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.938530 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-e762-account-create-update-l99mm"]
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939378 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="openstack-network-exporter"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939392 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="openstack-network-exporter"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939431 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="ovsdbserver-sb"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939438 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="ovsdbserver-sb"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939457 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-server"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939464 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-server"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939481 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="ovsdbserver-nb"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939487 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="ovsdbserver-nb"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939509 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ede60bf-5bc5-4267-9849-9389df070048" containerName="openstack-network-exporter"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939515 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ede60bf-5bc5-4267-9849-9389df070048" containerName="openstack-network-exporter"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939530 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" containerName="nova-cell1-novncproxy-novncproxy"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939537 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" containerName="nova-cell1-novncproxy-novncproxy"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939558 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="mysql-bootstrap"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939564 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="mysql-bootstrap"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939577 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-httpd"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939585 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-httpd"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939606 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="dnsmasq-dns"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939613 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="dnsmasq-dns"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939634 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="galera"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939641 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="galera"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939660 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="init"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939665 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="init"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939678 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="openstack-network-exporter"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939684 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="openstack-network-exporter"
Mar 20 07:15:51 crc kubenswrapper[5136]: E0320 07:15:51.939702 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38885968-65f8-45e9-8e72-7464d5e78b85" containerName="nova-cell0-conductor-conductor"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.939710 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="38885968-65f8-45e9-8e72-7464d5e78b85" containerName="nova-cell0-conductor-conductor"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940062 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="openstack-network-exporter"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940077 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" containerName="ovsdbserver-sb"
Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940084 5136 memory_manager.go:354] "RemoveStaleState
removing state" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="ovsdbserver-nb" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940095 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" containerName="dnsmasq-dns" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940113 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-server" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940125 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" containerName="proxy-httpd" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940138 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="38885968-65f8-45e9-8e72-7464d5e78b85" containerName="nova-cell0-conductor-conductor" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940148 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ede60bf-5bc5-4267-9849-9389df070048" containerName="openstack-network-exporter" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940162 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940180 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" containerName="galera" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940190 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" containerName="openstack-network-exporter" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.940965 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.942828 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.953017 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-766d94c967-pb9qd"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.953326 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-766d94c967-pb9qd" podUID="fab90141-26b4-4e46-a916-82190508d6e8" containerName="keystone-api" containerID="cri-o://55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e" gracePeriod=30 Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.974047 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-xztql"] Mar 20 07:15:51 crc kubenswrapper[5136]: I0320 07:15:51.982871 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38885968-65f8-45e9-8e72-7464d5e78b85-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.003496 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-e762-account-create-update-l99mm"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.021360 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.052002 5136 scope.go:117] "RemoveContainer" containerID="9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.052589 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-xztql"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.084958 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086392 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086453 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086472 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086536 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086569 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086588 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086629 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086659 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pv5f\" (UniqueName: \"kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f\") pod \"210df7e5-1603-40ec-bfa4-7b85525823b3\" (UID: \"210df7e5-1603-40ec-bfa4-7b85525823b3\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086807 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.086881 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22gkc\" (UniqueName: \"kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.090119 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.090441 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.090978 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.092724 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.093133 5136 scope.go:117] "RemoveContainer" containerID="9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.093687 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.094749 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f" (OuterVolumeSpecName: "kube-api-access-8pv5f") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "kube-api-access-8pv5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.095348 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff\": container with ID starting with 9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff not found: ID does not exist" containerID="9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.095466 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff"} err="failed to get container status \"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff\": rpc error: code = NotFound desc = could not find container \"9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff\": container with ID starting with 9499f021f44b9cfb2d1d15e04c8ee300bce97edc1f58a0cfe173305e3c3b97ff not found: ID does not exist" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.103793 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.112726 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-kfc9f"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.122419 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e762-account-create-update-l99mm"] Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.122960 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-22gkc operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-e762-account-create-update-l99mm" podUID="d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.130534 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0f6-account-create-update-d5xps" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.143156 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-kfc9f"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.151800 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.178118 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mzns4"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.200929 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts\") pod \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201138 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgbc2\" (UniqueName: \"kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2\") pod \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\" (UID: \"1490877f-a8fa-4bcd-8c33-be84b9b890aa\") " Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201448 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201601 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22gkc\" (UniqueName: \"kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201881 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201898 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/210df7e5-1603-40ec-bfa4-7b85525823b3-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201908 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201917 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pv5f\" (UniqueName: \"kubernetes.io/projected/210df7e5-1603-40ec-bfa4-7b85525823b3-kube-api-access-8pv5f\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201926 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/210df7e5-1603-40ec-bfa4-7b85525823b3-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201933 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.201953 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.202212 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.202284 5136 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:52.702264158 +0000 UTC m=+1584.961575309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : configmap "openstack-scripts" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.202348 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.202373 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data podName:c355061d-c5fd-4655-aa7e-37b5a40a0400 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:56.202366351 +0000 UTC m=+1588.461677502 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data") pod "rabbitmq-cell1-server-0" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400") : configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.202685 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1490877f-a8fa-4bcd-8c33-be84b9b890aa" (UID: "1490877f-a8fa-4bcd-8c33-be84b9b890aa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.210382 5136 projected.go:194] Error preparing data for projected volume kube-api-access-22gkc for pod openstack/keystone-e762-account-create-update-l99mm: failed to fetch token: serviceaccounts "galera-openstack" not found Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.210461 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:52.710428862 +0000 UTC m=+1584.969740013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-22gkc" (UniqueName: "kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : failed to fetch token: serviceaccounts "galera-openstack" not found Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.227920 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "210df7e5-1603-40ec-bfa4-7b85525823b3" (UID: "210df7e5-1603-40ec-bfa4-7b85525823b3"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.228944 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2" (OuterVolumeSpecName: "kube-api-access-bgbc2") pod "1490877f-a8fa-4bcd-8c33-be84b9b890aa" (UID: "1490877f-a8fa-4bcd-8c33-be84b9b890aa"). InnerVolumeSpecName "kube-api-access-bgbc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.241052 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.300278 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-vr74x"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.304682 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.304711 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgbc2\" (UniqueName: \"kubernetes.io/projected/1490877f-a8fa-4bcd-8c33-be84b9b890aa-kube-api-access-bgbc2\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.304722 5136 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/210df7e5-1603-40ec-bfa4-7b85525823b3-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.304730 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1490877f-a8fa-4bcd-8c33-be84b9b890aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.318750 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-vr74x"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.346167 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.352946 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/openstack-galera-0" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="galera" containerID="cri-o://bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2" gracePeriod=30 Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.362914 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bd85b459c-k44rj"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.378511 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.384541 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-5429-account-create-update-54j5b"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.391111 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.415568 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0954a67c-5522-4338-b9e6-fc1b35b48cdb" path="/var/lib/kubelet/pods/0954a67c-5522-4338-b9e6-fc1b35b48cdb/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.416657 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ede60bf-5bc5-4267-9849-9389df070048" path="/var/lib/kubelet/pods/0ede60bf-5bc5-4267-9849-9389df070048/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.417938 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17ad787b-18bc-4afd-840b-2458b494094a" path="/var/lib/kubelet/pods/17ad787b-18bc-4afd-840b-2458b494094a/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.419438 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b" path="/var/lib/kubelet/pods/4656b3f4-a2bd-4dd9-913c-a4c3a6d6076b/volumes" Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.420474 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="63ab8493-eb78-41d9-b368-bba74dc78166" path="/var/lib/kubelet/pods/63ab8493-eb78-41d9-b368-bba74dc78166/volumes"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.421306 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72e22e43-fccc-4ee4-a170-8ff8b9959c1d" path="/var/lib/kubelet/pods/72e22e43-fccc-4ee4-a170-8ff8b9959c1d/volumes"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.423741 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79272887-6a7f-4336-858a-6844ed6e8a37" path="/var/lib/kubelet/pods/79272887-6a7f-4336-858a-6844ed6e8a37/volumes"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.424434 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e39c5d-af98-44d6-a06d-f31555db758b" path="/var/lib/kubelet/pods/b4e39c5d-af98-44d6-a06d-f31555db758b/volumes"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.425575 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e7cfea-b971-447e-a166-20b4827ce7dc" path="/var/lib/kubelet/pods/c7e7cfea-b971-447e-a166-20b4827ce7dc/volumes"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.426548 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc70cce-242d-4c99-8d3f-ddb541904e29" path="/var/lib/kubelet/pods/ccc70cce-242d-4c99-8d3f-ddb541904e29/volumes"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.428691 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.564548 5136 generic.go:334] "Generic (PLEG): container finished" podID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerID="d7a0a1abaf7649e23b98061506f3cd0ac2d6d6bb3c69694e84fdfcd0ac7ca124" exitCode=0
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.564632 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerDied","Data":"d7a0a1abaf7649e23b98061506f3cd0ac2d6d6bb3c69694e84fdfcd0ac7ca124"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.566289 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-a0f6-account-create-update-d5xps"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.566285 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-a0f6-account-create-update-d5xps" event={"ID":"1490877f-a8fa-4bcd-8c33-be84b9b890aa","Type":"ContainerDied","Data":"fedc877299952c2908f9ddcf965bfe8418828d992b0c90e9fc6df145a89c5cf7"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.569452 5136 generic.go:334] "Generic (PLEG): container finished" podID="af66742a-1452-436f-a22e-7dc277cf690a" containerID="f58e5688042713c0b783865903009ad2d11f4198be5353e16c7c078fdfea1674" exitCode=0
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.569528 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerDied","Data":"f58e5688042713c0b783865903009ad2d11f4198be5353e16c7c078fdfea1674"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.571330 5136 generic.go:334] "Generic (PLEG): container finished" podID="5d2085e7-db7e-4655-965c-027d03e474e0" containerID="7950202bc7e7645f213c50f85961805c7e38b2378de8c350f722fea9bf137e17" exitCode=1
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.571354 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzns4" event={"ID":"5d2085e7-db7e-4655-965c-027d03e474e0","Type":"ContainerDied","Data":"7950202bc7e7645f213c50f85961805c7e38b2378de8c350f722fea9bf137e17"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.571397 5136 scope.go:117] "RemoveContainer" containerID="221581ba79e5516de8738bb17ad49d51aed46039fd504cffbf7aaf70d3a7d74b"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.574973 5136 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/root-account-create-update-mzns4" secret="" err="secret \"galera-openstack-dockercfg-7hd6r\" not found"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.575033 5136 scope.go:117] "RemoveContainer" containerID="7950202bc7e7645f213c50f85961805c7e38b2378de8c350f722fea9bf137e17"
Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.575641 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-mzns4_openstack(5d2085e7-db7e-4655-965c-027d03e474e0)\"" pod="openstack/root-account-create-update-mzns4" podUID="5d2085e7-db7e-4655-965c-027d03e474e0"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.589183 5136 generic.go:334] "Generic (PLEG): container finished" podID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerID="08811e57d5ae08f29bf6ac8aa7f95e929dc4a7c310d13f900fb2c645979418d6" exitCode=0
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.589227 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerDied","Data":"08811e57d5ae08f29bf6ac8aa7f95e929dc4a7c310d13f900fb2c645979418d6"}
Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.612330 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.612387 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts podName:5d2085e7-db7e-4655-965c-027d03e474e0 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:53.112374691 +0000 UTC m=+1585.371685842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts") pod "root-account-create-update-mzns4" (UID: "5d2085e7-db7e-4655-965c-027d03e474e0") : configmap "openstack-scripts" not found
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617226 5136 generic.go:334] "Generic (PLEG): container finished" podID="27a464a7-cea7-4265-a264-85a991452e95" containerID="6f8eb1aeebd08bbda86b110a89e3d6395071812ae31b73c86592f671595b894d" exitCode=0
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617253 5136 generic.go:334] "Generic (PLEG): container finished" podID="27a464a7-cea7-4265-a264-85a991452e95" containerID="37086a66c3062e12cadb5382a0b51ad5a523fc39db2e404a6feda0518d0eb230" exitCode=2
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617261 5136 generic.go:334] "Generic (PLEG): container finished" podID="27a464a7-cea7-4265-a264-85a991452e95" containerID="0ecf229966ba8c79d4898c6f188447ddf715aa5d36d580596d93cecd4aca45f3" exitCode=0
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617301 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerDied","Data":"6f8eb1aeebd08bbda86b110a89e3d6395071812ae31b73c86592f671595b894d"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617328 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerDied","Data":"37086a66c3062e12cadb5382a0b51ad5a523fc39db2e404a6feda0518d0eb230"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.617338 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerDied","Data":"0ecf229966ba8c79d4898c6f188447ddf715aa5d36d580596d93cecd4aca45f3"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.620011 5136 generic.go:334] "Generic (PLEG): container finished" podID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerID="3cd1e6e7f78367e01aa376387fc42404408757a25929ca8c639774857d99ccfd" exitCode=0
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.620089 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerDied","Data":"3cd1e6e7f78367e01aa376387fc42404408757a25929ca8c639774857d99ccfd"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.622933 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"210df7e5-1603-40ec-bfa4-7b85525823b3","Type":"ContainerDied","Data":"8182f12d4de26ad384abd8e2a3a9007acaaad7cd8b7e832cca1481d0c6ef89ef"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.623030 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.640285 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.647008 5136 generic.go:334] "Generic (PLEG): container finished" podID="76d08c01-d488-4f36-9998-7f074633c7c5" containerID="f3795d92724d61612c4e998a075d7ebdd89fee122f5c02cbcebdad3f46cd4b7c" exitCode=0
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.647081 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerDied","Data":"f3795d92724d61612c4e998a075d7ebdd89fee122f5c02cbcebdad3f46cd4b7c"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.690114 5136 generic.go:334] "Generic (PLEG): container finished" podID="c17493c5-d958-46ab-8e02-d190b2fa6944" containerID="4af2afde3b60e503cf744acf4fb08477b7ec46cb1b30cfb589608690a2df8849" exitCode=2
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.690158 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c17493c5-d958-46ab-8e02-d190b2fa6944","Type":"ContainerDied","Data":"4af2afde3b60e503cf744acf4fb08477b7ec46cb1b30cfb589608690a2df8849"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.690182 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e762-account-create-update-l99mm"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.690204 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"c17493c5-d958-46ab-8e02-d190b2fa6944","Type":"ContainerDied","Data":"634be8a4401daf1087438b6bbc45263ddd43a3a043a4d0fdc9026fb30fdc45cf"}
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.690218 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="634be8a4401daf1087438b6bbc45263ddd43a3a043a4d0fdc9026fb30fdc45cf"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.691118 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.713212 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sf4lm\" (UniqueName: \"kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm\") pod \"c17493c5-d958-46ab-8e02-d190b2fa6944\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") "
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.713711 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs\") pod \"c17493c5-d958-46ab-8e02-d190b2fa6944\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") "
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.713760 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config\") pod \"c17493c5-d958-46ab-8e02-d190b2fa6944\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") "
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.713882 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle\") pod \"c17493c5-d958-46ab-8e02-d190b2fa6944\" (UID: \"c17493c5-d958-46ab-8e02-d190b2fa6944\") "
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.714328 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.714404 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22gkc\" (UniqueName: \"kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.719327 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm" (OuterVolumeSpecName: "kube-api-access-sf4lm") pod "c17493c5-d958-46ab-8e02-d190b2fa6944" (UID: "c17493c5-d958-46ab-8e02-d190b2fa6944"). InnerVolumeSpecName "kube-api-access-sf4lm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.720210 5136 projected.go:194] Error preparing data for projected volume kube-api-access-22gkc for pod openstack/keystone-e762-account-create-update-l99mm: failed to fetch token: serviceaccounts "galera-openstack" not found
Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.720283 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:53.720264166 +0000 UTC m=+1585.979575317 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-22gkc" (UniqueName: "kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : failed to fetch token: serviceaccounts "galera-openstack" not found
Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.721612 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Mar 20 07:15:52 crc kubenswrapper[5136]: E0320 07:15:52.721711 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:53.721687641 +0000 UTC m=+1585.980998792 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : configmap "openstack-scripts" not found
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.734254 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-gzjnr"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.757573 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-9sp6w"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.784424 5136 scope.go:117] "RemoveContainer" containerID="2f108d33fa61f242b6db0d1497e869fa649b09a841ba1ec8a5200036f1da6f44"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.794988 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c17493c5-d958-46ab-8e02-d190b2fa6944" (UID: "c17493c5-d958-46ab-8e02-d190b2fa6944"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.796680 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "c17493c5-d958-46ab-8e02-d190b2fa6944" (UID: "c17493c5-d958-46ab-8e02-d190b2fa6944"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.805465 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"]
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.814958 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.816413 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5h64x\" (UniqueName: \"kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x\") pod \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") "
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.816562 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts\") pod \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\" (UID: \"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3\") "
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.816686 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzplt\" (UniqueName: \"kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt\") pod \"17669c27-ef49-4ced-a620-ef7394f02110\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") "
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.816762 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts\") pod \"17669c27-ef49-4ced-a620-ef7394f02110\" (UID: \"17669c27-ef49-4ced-a620-ef7394f02110\") "
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.817972 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e762-account-create-update-l99mm"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.818470 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sf4lm\" (UniqueName: \"kubernetes.io/projected/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-api-access-sf4lm\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.818493 5136 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.818505 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.820259 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3" (UID: "12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.820985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "17669c27-ef49-4ced-a620-ef7394f02110" (UID: "17669c27-ef49-4ced-a620-ef7394f02110"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.824500 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt" (OuterVolumeSpecName: "kube-api-access-jzplt") pod "17669c27-ef49-4ced-a620-ef7394f02110" (UID: "17669c27-ef49-4ced-a620-ef7394f02110"). InnerVolumeSpecName "kube-api-access-jzplt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.846783 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-a0f6-account-create-update-d5xps"]
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.853506 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x" (OuterVolumeSpecName: "kube-api-access-5h64x") pod "12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3" (UID: "12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3"). InnerVolumeSpecName "kube-api-access-5h64x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.864287 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "c17493c5-d958-46ab-8e02-d190b2fa6944" (UID: "c17493c5-d958-46ab-8e02-d190b2fa6944"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.871961 5136 scope.go:117] "RemoveContainer" containerID="efcc419ead7f776e9a762552e20519a145846b32963d5cf946e1216d1b57e9d3"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.897313 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.906713 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.909694 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.919113 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.919879 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts\") pod \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") "
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920035 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x89p2\" (UniqueName: \"kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2\") pod \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\" (UID: \"6638ac71-bcca-4dbb-9ec3-d9ef0da336db\") "
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920489 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/17669c27-ef49-4ced-a620-ef7394f02110-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920554 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5h64x\" (UniqueName: \"kubernetes.io/projected/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-kube-api-access-5h64x\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920619 5136 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/c17493c5-d958-46ab-8e02-d190b2fa6944-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920695 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920768 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzplt\" (UniqueName: \"kubernetes.io/projected/17669c27-ef49-4ced-a620-ef7394f02110-kube-api-access-jzplt\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.920707 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6638ac71-bcca-4dbb-9ec3-d9ef0da336db" (UID: "6638ac71-bcca-4dbb-9ec3-d9ef0da336db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.929681 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2" (OuterVolumeSpecName: "kube-api-access-x89p2") pod "6638ac71-bcca-4dbb-9ec3-d9ef0da336db" (UID: "6638ac71-bcca-4dbb-9ec3-d9ef0da336db"). InnerVolumeSpecName "kube-api-access-x89p2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.930588 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.947339 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 07:15:52 crc kubenswrapper[5136]: I0320 07:15:52.980231 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.022101 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.022308 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.022853 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.022931 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023021 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krchg\" (UniqueName: \"kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023086 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs\") pod \"af66742a-1452-436f-a22e-7dc277cf690a\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023156 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023225 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023315 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88mxs\" (UniqueName: \"kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs\") pod \"af66742a-1452-436f-a22e-7dc277cf690a\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023530 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023704 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8jnh\" (UniqueName: \"kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023774 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle\") pod \"af66742a-1452-436f-a22e-7dc277cf690a\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023856 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.023935 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data\") pod \"af66742a-1452-436f-a22e-7dc277cf690a\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024000 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs\") pod \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\" (UID: \"9dc2d320-2468-4a45-ba6b-69ea478b5e8c\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024070 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs\") pod \"af66742a-1452-436f-a22e-7dc277cf690a\" (UID: \"af66742a-1452-436f-a22e-7dc277cf690a\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024140 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024223 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs\") pod \"fe20adf9-d6e2-4487-a176-32ddd55eb051\" (UID: \"fe20adf9-d6e2-4487-a176-32ddd55eb051\") "
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024717 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.024781 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x89p2\" (UniqueName: \"kubernetes.io/projected/6638ac71-bcca-4dbb-9ec3-d9ef0da336db-kube-api-access-x89p2\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.025338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs" (OuterVolumeSpecName: "logs") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.028233 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts" (OuterVolumeSpecName: "scripts") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.029185 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs" (OuterVolumeSpecName: "logs") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.030147 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg" (OuterVolumeSpecName: "kube-api-access-krchg") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "kube-api-access-krchg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.030159 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.030539 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs" (OuterVolumeSpecName: "logs") pod "af66742a-1452-436f-a22e-7dc277cf690a" (UID: "af66742a-1452-436f-a22e-7dc277cf690a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.032038 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs" (OuterVolumeSpecName: "kube-api-access-88mxs") pod "af66742a-1452-436f-a22e-7dc277cf690a" (UID: "af66742a-1452-436f-a22e-7dc277cf690a"). InnerVolumeSpecName "kube-api-access-88mxs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.080997 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.081517 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh" (OuterVolumeSpecName: "kube-api-access-b8jnh") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "kube-api-access-b8jnh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.115313 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data" (OuterVolumeSpecName: "config-data") pod "af66742a-1452-436f-a22e-7dc277cf690a" (UID: "af66742a-1452-436f-a22e-7dc277cf690a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.132927 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133470 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133494 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krchg\" (UniqueName: \"kubernetes.io/projected/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-kube-api-access-krchg\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133504 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/af66742a-1452-436f-a22e-7dc277cf690a-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133512 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133520 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88mxs\" (UniqueName: \"kubernetes.io/projected/af66742a-1452-436f-a22e-7dc277cf690a-kube-api-access-88mxs\") on node \"crc\" DevicePath \"\""
Mar 20
07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133528 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133536 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8jnh\" (UniqueName: \"kubernetes.io/projected/fe20adf9-d6e2-4487-a176-32ddd55eb051-kube-api-access-b8jnh\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133554 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133562 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.133572 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe20adf9-d6e2-4487-a176-32ddd55eb051-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.134531 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.134597 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts podName:5d2085e7-db7e-4655-965c-027d03e474e0 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:54.134578179 +0000 UTC m=+1586.393889330 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts") pod "root-account-create-update-mzns4" (UID: "5d2085e7-db7e-4655-965c-027d03e474e0") : configmap "openstack-scripts" not found Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.152895 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data" (OuterVolumeSpecName: "config-data") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.163219 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.183794 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.190582 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.197189 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "af66742a-1452-436f-a22e-7dc277cf690a" (UID: "af66742a-1452-436f-a22e-7dc277cf690a"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.208899 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.217110 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234702 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234757 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234777 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 
07:15:53.234822 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jl6kj\" (UniqueName: \"kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234845 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234920 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234959 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.234976 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235003 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle\") pod 
\"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235028 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235065 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235094 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235113 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235139 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235177 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-scwjn\" (UniqueName: \"kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn\") pod \"76d08c01-d488-4f36-9998-7f074633c7c5\" (UID: \"76d08c01-d488-4f36-9998-7f074633c7c5\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235206 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235221 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235575 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235593 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235602 5136 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235612 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" 
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235620 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.235629 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.236284 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs" (OuterVolumeSpecName: "logs") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.243064 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts" (OuterVolumeSpecName: "scripts") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.251481 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.251847 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs" (OuterVolumeSpecName: "logs") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.252287 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.257739 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data" (OuterVolumeSpecName: "config-data") pod "fe20adf9-d6e2-4487-a176-32ddd55eb051" (UID: "fe20adf9-d6e2-4487-a176-32ddd55eb051"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.259131 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn" (OuterVolumeSpecName: "kube-api-access-scwjn") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "kube-api-access-scwjn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.259327 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.259694 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts" (OuterVolumeSpecName: "scripts") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.259793 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.260881 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj" (OuterVolumeSpecName: "kube-api-access-jl6kj") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "kube-api-access-jl6kj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.281083 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.295672 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.336386 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.336577 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") pod \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\" (UID: \"141e5942-2bf9-424c-a6a7-7c93afdad7dc\") " Mar 20 07:15:53 crc kubenswrapper[5136]: W0320 07:15:53.336729 5136 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/141e5942-2bf9-424c-a6a7-7c93afdad7dc/volumes/kubernetes.io~secret/internal-tls-certs Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.336750 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337105 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337127 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337172 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jl6kj\" (UniqueName: \"kubernetes.io/projected/141e5942-2bf9-424c-a6a7-7c93afdad7dc-kube-api-access-jl6kj\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337184 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/76d08c01-d488-4f36-9998-7f074633c7c5-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337196 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe20adf9-d6e2-4487-a176-32ddd55eb051-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337207 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337218 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337228 5136 reconciler_common.go:293] "Volume detached for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-logs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337238 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337248 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/76d08c01-d488-4f36-9998-7f074633c7c5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337260 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scwjn\" (UniqueName: \"kubernetes.io/projected/76d08c01-d488-4f36-9998-7f074633c7c5-kube-api-access-scwjn\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337270 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/141e5942-2bf9-424c-a6a7-7c93afdad7dc-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337280 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.337292 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.374989 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle" (OuterVolumeSpecName: 
"combined-ca-bundle") pod "af66742a-1452-436f-a22e-7dc277cf690a" (UID: "af66742a-1452-436f-a22e-7dc277cf690a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.386980 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9dc2d320-2468-4a45-ba6b-69ea478b5e8c" (UID: "9dc2d320-2468-4a45-ba6b-69ea478b5e8c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.388578 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.415493 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.452001 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.455041 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data" (OuterVolumeSpecName: "config-data") pod "141e5942-2bf9-424c-a6a7-7c93afdad7dc" (UID: "141e5942-2bf9-424c-a6a7-7c93afdad7dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465839 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465870 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9dc2d320-2468-4a45-ba6b-69ea478b5e8c-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465882 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465893 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465903 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/141e5942-2bf9-424c-a6a7-7c93afdad7dc-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.465916 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/af66742a-1452-436f-a22e-7dc277cf690a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.505369 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.535105 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data" (OuterVolumeSpecName: "config-data") pod "76d08c01-d488-4f36-9998-7f074633c7c5" (UID: "76d08c01-d488-4f36-9998-7f074633c7c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.567979 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.568003 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76d08c01-d488-4f36-9998-7f074633c7c5-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.710103 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.710127 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-0f90-account-create-update-k7zvd" event={"ID":"6638ac71-bcca-4dbb-9ec3-d9ef0da336db","Type":"ContainerDied","Data":"85c5ef70686107412e859254f31a559f15711b7d6e9fc5a62fab2055603accd9"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.713854 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fe20adf9-d6e2-4487-a176-32ddd55eb051","Type":"ContainerDied","Data":"9a85481c71cfaf5395cd3f9b7fc38785dc4f071aee32ec8ab4a9c0c94e256ebb"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.713930 5136 scope.go:117] "RemoveContainer" containerID="08811e57d5ae08f29bf6ac8aa7f95e929dc4a7c310d13f900fb2c645979418d6"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.714164 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.731621 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"af66742a-1452-436f-a22e-7dc277cf690a","Type":"ContainerDied","Data":"5ace71694fb7cbf300c7ddec51eede62a282d34c72dfdbb59b1c4aee86e0f888"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.731799 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.742716 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e3bd-account-create-update-gzjnr" event={"ID":"12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3","Type":"ContainerDied","Data":"c7edf0f3f6556e6164f7e7ded6cdbe431275f40b863ad60fee11457248b95cfe"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.742739 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e3bd-account-create-update-gzjnr"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.772157 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22gkc\" (UniqueName: \"kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.772273 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts\") pod \"keystone-e762-account-create-update-l99mm\" (UID: \"d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7\") " pod="openstack/keystone-e762-account-create-update-l99mm"
Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.772409 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.772452 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:55.772438576 +0000 UTC m=+1588.031749727 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : configmap "openstack-scripts" not found
Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.776397 5136 projected.go:194] Error preparing data for projected volume kube-api-access-22gkc for pod openstack/keystone-e762-account-create-update-l99mm: failed to fetch token: serviceaccounts "galera-openstack" not found
Mar 20 07:15:53 crc kubenswrapper[5136]: E0320 07:15:53.776447 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc podName:d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:55.776434271 +0000 UTC m=+1588.035745422 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-22gkc" (UniqueName: "kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc") pod "keystone-e762-account-create-update-l99mm" (UID: "d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7") : failed to fetch token: serviceaccounts "galera-openstack" not found
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.788454 5136 generic.go:334] "Generic (PLEG): container finished" podID="2a59ab3d-3094-4e10-bbde-44479696f752" containerID="cd65718bfac09f4d934fe1bf3f629f5d852e12343f9a1b480d6984e7497c79aa" exitCode=0
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.788467 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerDied","Data":"cd65718bfac09f4d934fe1bf3f629f5d852e12343f9a1b480d6984e7497c79aa"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.790643 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd71646c-cb64-4a01-8076-449c812955d5" containerID="605e2f1b6fdab04852864ae8ba9a1933cc6fbe478b172080fd10f5d23b52f0fe" exitCode=0
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.790687 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerDied","Data":"605e2f1b6fdab04852864ae8ba9a1933cc6fbe478b172080fd10f5d23b52f0fe"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.790703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-dc8db4fdb-hpjdg" event={"ID":"bd71646c-cb64-4a01-8076-449c812955d5","Type":"ContainerDied","Data":"456674fb963104b875873a874337c20143adc46f3a809c5e2ae04c7d773c4641"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.790715 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="456674fb963104b875873a874337c20143adc46f3a809c5e2ae04c7d773c4641"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.792235 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0423-account-create-update-9sp6w" event={"ID":"17669c27-ef49-4ced-a620-ef7394f02110","Type":"ContainerDied","Data":"f713db634b57c569428b9818f10efddf9949e7de62c4b479a6b5e91e44342d03"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.792344 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0423-account-create-update-9sp6w"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.801922 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.801918 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9dc2d320-2468-4a45-ba6b-69ea478b5e8c","Type":"ContainerDied","Data":"3e1ef3947decd903ee68750e425aae8b49ed5ff25716e3720a5bb3892e21abbc"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.808722 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"141e5942-2bf9-424c-a6a7-7c93afdad7dc","Type":"ContainerDied","Data":"05a82ec3ba1c0f76f4e62724ce02ba143154e57bc0f0c3ee005d1f4b00278ffd"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.808799 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.816738 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"76d08c01-d488-4f36-9998-7f074633c7c5","Type":"ContainerDied","Data":"aab757c2dc94939d0797a6c8423da847cee9bc95e622fe61ec74ea5928df97cd"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.816861 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.823883 5136 generic.go:334] "Generic (PLEG): container finished" podID="27a464a7-cea7-4265-a264-85a991452e95" containerID="cf4672bac844a81b21416c7a8623ac1f87041db75209a7a401cb201726b76413" exitCode=0
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.823958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerDied","Data":"cf4672bac844a81b21416c7a8623ac1f87041db75209a7a401cb201726b76413"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.855566 5136 generic.go:334] "Generic (PLEG): container finished" podID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerID="184e304c1ac08ec0deea0a800adeaeabbaf3a333a8f4d43b893a721e45afd9b7" exitCode=0
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.855632 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"960739f0-c4a5-49c6-8e2a-9452815cf1a9","Type":"ContainerDied","Data":"184e304c1ac08ec0deea0a800adeaeabbaf3a333a8f4d43b893a721e45afd9b7"}
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.865496 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-e762-account-create-update-l99mm"
Mar 20 07:15:53 crc kubenswrapper[5136]: I0320 07:15:53.865573 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.146332 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc8db4fdb-hpjdg"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.155561 5136 scope.go:117] "RemoveContainer" containerID="ddf75c942dcbf834dfd88d5bc8a1e8a0fa00deb6223f84700bb4c75bb0cce612"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.173671 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.178903 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179045 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktsr8\" (UniqueName: \"kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179113 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179182 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179217 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179237 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.179264 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data\") pod \"bd71646c-cb64-4a01-8076-449c812955d5\" (UID: \"bd71646c-cb64-4a01-8076-449c812955d5\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.179768 5136 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found
Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.179850 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts podName:5d2085e7-db7e-4655-965c-027d03e474e0 nodeName:}" failed. No retries permitted until 2026-03-20 07:15:56.179832564 +0000 UTC m=+1588.439143715 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts") pod "root-account-create-update-mzns4" (UID: "5d2085e7-db7e-4655-965c-027d03e474e0") : configmap "openstack-scripts" not found
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.181170 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs" (OuterVolumeSpecName: "logs") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.195429 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8" (OuterVolumeSpecName: "kube-api-access-ktsr8") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "kube-api-access-ktsr8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.196188 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts" (OuterVolumeSpecName: "scripts") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.233347 5136 scope.go:117] "RemoveContainer" containerID="f58e5688042713c0b783865903009ad2d11f4198be5353e16c7c078fdfea1674"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.286719 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs\") pod \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.286946 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data\") pod \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.287068 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config\") pod \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.287174 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmhrz\" (UniqueName: \"kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz\") pod \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.287217 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle\") pod \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\" (UID: \"960739f0-c4a5-49c6-8e2a-9452815cf1a9\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.288596 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktsr8\" (UniqueName: \"kubernetes.io/projected/bd71646c-cb64-4a01-8076-449c812955d5-kube-api-access-ktsr8\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.288628 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.288651 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bd71646c-cb64-4a01-8076-449c812955d5-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.292788 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "960739f0-c4a5-49c6-8e2a-9452815cf1a9" (UID: "960739f0-c4a5-49c6-8e2a-9452815cf1a9"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.299810 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.301576 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.308576 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.316833 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.319941 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data" (OuterVolumeSpecName: "config-data") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.332927 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.333736 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz" (OuterVolumeSpecName: "kube-api-access-lmhrz") pod "960739f0-c4a5-49c6-8e2a-9452815cf1a9" (UID: "960739f0-c4a5-49c6-8e2a-9452815cf1a9"). InnerVolumeSpecName "kube-api-access-lmhrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.336120 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data" (OuterVolumeSpecName: "config-data") pod "960739f0-c4a5-49c6-8e2a-9452815cf1a9" (UID: "960739f0-c4a5-49c6-8e2a-9452815cf1a9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.353079 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.388514 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"]
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391146 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data\") pod \"2a59ab3d-3094-4e10-bbde-44479696f752\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391218 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctrhs\" (UniqueName: \"kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs\") pod \"2a59ab3d-3094-4e10-bbde-44479696f752\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391241 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391283 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom\") pod \"2a59ab3d-3094-4e10-bbde-44479696f752\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391320 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle\") pod \"2a59ab3d-3094-4e10-bbde-44479696f752\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391384 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391448 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq8fl\" (UniqueName: \"kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391472 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391496 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391512 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391530 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs\") pod \"2a59ab3d-3094-4e10-bbde-44479696f752\" (UID: \"2a59ab3d-3094-4e10-bbde-44479696f752\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391574 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.391591 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd\") pod \"27a464a7-cea7-4265-a264-85a991452e95\" (UID: \"27a464a7-cea7-4265-a264-85a991452e95\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.392376 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kolla-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.392444 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.392497 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmhrz\" (UniqueName: \"kubernetes.io/projected/960739f0-c4a5-49c6-8e2a-9452815cf1a9-kube-api-access-lmhrz\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.392551 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/960739f0-c4a5-49c6-8e2a-9452815cf1a9-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.393197 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.395339 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts" (OuterVolumeSpecName: "scripts") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.396366 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs" (OuterVolumeSpecName: "logs") pod "2a59ab3d-3094-4e10-bbde-44479696f752" (UID: "2a59ab3d-3094-4e10-bbde-44479696f752"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.396416 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e3bd-account-create-update-gzjnr"]
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.396586 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.422063 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl" (OuterVolumeSpecName: "kube-api-access-zq8fl") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "kube-api-access-zq8fl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.479010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2a59ab3d-3094-4e10-bbde-44479696f752" (UID: "2a59ab3d-3094-4e10-bbde-44479696f752"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.479527 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs" (OuterVolumeSpecName: "kube-api-access-ctrhs") pod "2a59ab3d-3094-4e10-bbde-44479696f752" (UID: "2a59ab3d-3094-4e10-bbde-44479696f752"). InnerVolumeSpecName "kube-api-access-ctrhs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.482979 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3" path="/var/lib/kubelet/pods/12a2cdc2-1b05-4bfd-99e9-ce92d81d3af3/volumes"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.483449 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1490877f-a8fa-4bcd-8c33-be84b9b890aa" path="/var/lib/kubelet/pods/1490877f-a8fa-4bcd-8c33-be84b9b890aa/volumes"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.483968 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210df7e5-1603-40ec-bfa4-7b85525823b3" path="/var/lib/kubelet/pods/210df7e5-1603-40ec-bfa4-7b85525823b3/volumes"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.484554 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38885968-65f8-45e9-8e72-7464d5e78b85" path="/var/lib/kubelet/pods/38885968-65f8-45e9-8e72-7464d5e78b85/volumes"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.485607 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" path="/var/lib/kubelet/pods/76d08c01-d488-4f36-9998-7f074633c7c5/volumes"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.486458 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af66742a-1452-436f-a22e-7dc277cf690a" path="/var/lib/kubelet/pods/af66742a-1452-436f-a22e-7dc277cf690a/volumes"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.487651 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f872c575-a357-4b29-b5e8-cf5dbe6f3d7a" path="/var/lib/kubelet/pods/f872c575-a357-4b29-b5e8-cf5dbe6f3d7a/volumes"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495889 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2a59ab3d-3094-4e10-bbde-44479696f752-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495948 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495960 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctrhs\" (UniqueName: \"kubernetes.io/projected/2a59ab3d-3094-4e10-bbde-44479696f752-kube-api-access-ctrhs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495969 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495978 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495989 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq8fl\" (UniqueName: \"kubernetes.io/projected/27a464a7-cea7-4265-a264-85a991452e95-kube-api-access-zq8fl\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.495998 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/27a464a7-cea7-4265-a264-85a991452e95-run-httpd\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.510308 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "960739f0-c4a5-49c6-8e2a-9452815cf1a9" (UID: "960739f0-c4a5-49c6-8e2a-9452815cf1a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.525388 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2a59ab3d-3094-4e10-bbde-44479696f752" (UID: "2a59ab3d-3094-4e10-bbde-44479696f752"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.543011 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.560985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.578529 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data" (OuterVolumeSpecName: "config-data") pod "2a59ab3d-3094-4e10-bbde-44479696f752" (UID: "2a59ab3d-3094-4e10-bbde-44479696f752"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.581267 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.589027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.589102 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "960739f0-c4a5-49c6-8e2a-9452815cf1a9" (UID: "960739f0-c4a5-49c6-8e2a-9452815cf1a9"). InnerVolumeSpecName "memcached-tls-certs".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601154 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601229 5136 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/960739f0-c4a5-49c6-8e2a-9452815cf1a9-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601240 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601270 5136 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601280 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601289 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.601298 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2a59ab3d-3094-4e10-bbde-44479696f752-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: 
I0320 07:15:54.601307 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.609088 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data" (OuterVolumeSpecName: "config-data") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.612545 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27a464a7-cea7-4265-a264-85a991452e95" (UID: "27a464a7-cea7-4265-a264-85a991452e95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.629711 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bd71646c-cb64-4a01-8076-449c812955d5" (UID: "bd71646c-cb64-4a01-8076-449c812955d5"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663048 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663090 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663103 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663114 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663127 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-e762-account-create-update-l99mm"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663137 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-e762-account-create-update-l99mm"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663149 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663159 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0423-account-create-update-9sp6w"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663174 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663185 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-0f90-account-create-update-k7zvd"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663197 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663212 5136 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663224 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.663237 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.671259 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.676532 5136 scope.go:117] "RemoveContainer" containerID="deb975c81a5be70590a5a6b6abaf2e6a45b6848b791c313d96f348b2eb335fa8" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.677211 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7acbc76f-ff83-451e-826f-5fd1f977f74f/ovn-northd/0.log" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.677261 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.700171 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-mzns4" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702051 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702081 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702131 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702156 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdvcz\" (UniqueName: \"kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702207 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702228 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702250 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702301 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702319 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702337 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmgbx\" (UniqueName: \"kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702362 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: 
\"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702401 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702435 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702467 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts\") pod \"23c10323-3c49-4f00-8bf7-319e6f5834d0\" (UID: \"23c10323-3c49-4f00-8bf7-319e6f5834d0\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702493 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle\") pod \"7acbc76f-ff83-451e-826f-5fd1f977f74f\" (UID: \"7acbc76f-ff83-451e-826f-5fd1f977f74f\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702809 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702839 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bd71646c-cb64-4a01-8076-449c812955d5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 
07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702852 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22gkc\" (UniqueName: \"kubernetes.io/projected/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-kube-api-access-22gkc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702862 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.702874 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/27a464a7-cea7-4265-a264-85a991452e95-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.703537 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config" (OuterVolumeSpecName: "config") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.703599 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.703902 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.704184 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts" (OuterVolumeSpecName: "scripts") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.704509 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.704716 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.705069 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.717259 5136 scope.go:117] "RemoveContainer" containerID="d7a0a1abaf7649e23b98061506f3cd0ac2d6d6bb3c69694e84fdfcd0ac7ca124" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.722780 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz" (OuterVolumeSpecName: "kube-api-access-jdvcz") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "kube-api-access-jdvcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.731300 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "mysql-db") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.739572 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx" (OuterVolumeSpecName: "kube-api-access-kmgbx") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "kube-api-access-kmgbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.741293 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64845646dd-wf28v" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.741694 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78df67c79-bqz8t" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.745142 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.751009 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.760215 5136 scope.go:117] "RemoveContainer" containerID="a74fc94f08f1ff50393fca876adcf8dbd23397ae767f8be9bb23ab400c14c48a" Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.780696 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.781069 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.781422 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.781450 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" 
pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.783394 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.787464 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "23c10323-3c49-4f00-8bf7-319e6f5834d0" (UID: "23c10323-3c49-4f00-8bf7-319e6f5834d0"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.791986 5136 scope.go:117] "RemoveContainer" containerID="3cd1e6e7f78367e01aa376387fc42404408757a25929ca8c639774857d99ccfd" Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.792084 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.792122 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "ovn-northd-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.794021 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "7acbc76f-ff83-451e-826f-5fd1f977f74f" (UID: "7acbc76f-ff83-451e-826f-5fd1f977f74f"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.794047 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:54 crc kubenswrapper[5136]: E0320 07:15:54.794121 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803658 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803692 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") " 
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803727 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle\") pod \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803782 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts\") pod \"5d2085e7-db7e-4655-965c-027d03e474e0\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803893 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom\") pod \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803925 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803956 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knrlt\" (UniqueName: \"kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt\") pod \"5d2085e7-db7e-4655-965c-027d03e474e0\" (UID: \"5d2085e7-db7e-4655-965c-027d03e474e0\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.803977 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnkzp\" (UniqueName: \"kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp\") pod \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804007 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzlq9\" (UniqueName: \"kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804030 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs\") pod \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804060 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804090 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804106 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs\") pod \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\" (UID: \"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804122 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data\") pod \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\" (UID: \"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0\") "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804479 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d2085e7-db7e-4655-965c-027d03e474e0" (UID: "5d2085e7-db7e-4655-965c-027d03e474e0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804852 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804871 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d2085e7-db7e-4655-965c-027d03e474e0-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804883 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-generated\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804902 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" "
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804912 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804922 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804932 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-kolla-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804942 5136 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-galera-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804951 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/23c10323-3c49-4f00-8bf7-319e6f5834d0-config-data-default\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804960 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdvcz\" (UniqueName: \"kubernetes.io/projected/7acbc76f-ff83-451e-826f-5fd1f977f74f-kube-api-access-jdvcz\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804969 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804977 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-rundir\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804986 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/23c10323-3c49-4f00-8bf7-319e6f5834d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.804996 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/7acbc76f-ff83-451e-826f-5fd1f977f74f-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.805004 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7acbc76f-ff83-451e-826f-5fd1f977f74f-config\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.805012 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmgbx\" (UniqueName: \"kubernetes.io/projected/23c10323-3c49-4f00-8bf7-319e6f5834d0-kube-api-access-kmgbx\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.809740 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9" (OuterVolumeSpecName: "kube-api-access-jzlq9") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "kube-api-access-jzlq9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.814741 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs" (OuterVolumeSpecName: "logs") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.814890 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs" (OuterVolumeSpecName: "logs") pod "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" (UID: "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.815853 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt" (OuterVolumeSpecName: "kube-api-access-knrlt") pod "5d2085e7-db7e-4655-965c-027d03e474e0" (UID: "5d2085e7-db7e-4655-965c-027d03e474e0"). InnerVolumeSpecName "kube-api-access-knrlt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.818113 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp" (OuterVolumeSpecName: "kube-api-access-gnkzp") pod "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" (UID: "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0"). InnerVolumeSpecName "kube-api-access-gnkzp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.821622 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.824108 5136 scope.go:117] "RemoveContainer" containerID="662932f3f7c10a1e5293dce36be1a7b6f6fe4dc40e2e2d324b7c68188c034162"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.825460 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" (UID: "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.833152 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.837784 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" (UID: "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.844059 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.848339 5136 scope.go:117] "RemoveContainer" containerID="f3795d92724d61612c4e998a075d7ebdd89fee122f5c02cbcebdad3f46cd4b7c"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.860719 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data" (OuterVolumeSpecName: "config-data") pod "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" (UID: "6fd2bfe2-2220-4617-ac9a-d02f6222cfd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.864830 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.868667 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data" (OuterVolumeSpecName: "config-data") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.869425 5136 scope.go:117] "RemoveContainer" containerID="c15b277e6d0d090e0e5755609decc556ab1c1f03a878f14749a60fdfeeec941e"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.882729 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_7acbc76f-ff83-451e-826f-5fd1f977f74f/ovn-northd/0.log"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.882778 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" (UID: "f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.882780 5136 generic.go:334] "Generic (PLEG): container finished" podID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242" exitCode=139
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.882802 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerDied","Data":"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.883049 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"7acbc76f-ff83-451e-826f-5fd1f977f74f","Type":"ContainerDied","Data":"f77fa3b0a75190383cf99cb089377cd7d03639ec4bda09b8550cc55a30016174"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.882881 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.889797 5136 generic.go:334] "Generic (PLEG): container finished" podID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerID="b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd" exitCode=0
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.889884 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerDied","Data":"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.889906 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-64845646dd-wf28v" event={"ID":"f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b","Type":"ContainerDied","Data":"a17e5911dd44a70eaddf965e56b21ab05149f56b96de7f20bf6f4c657c514884"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.889970 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-64845646dd-wf28v"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.902748 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.902773 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-65ccfb89b4-s479g" event={"ID":"2a59ab3d-3094-4e10-bbde-44479696f752","Type":"ContainerDied","Data":"adeda094c452ea454b57acc36b81655df6fbdea86bb257845c27b0e1e0656a6f"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906420 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906442 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906451 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906460 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knrlt\" (UniqueName: \"kubernetes.io/projected/5d2085e7-db7e-4655-965c-027d03e474e0-kube-api-access-knrlt\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906469 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnkzp\" (UniqueName: \"kubernetes.io/projected/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-kube-api-access-gnkzp\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906477 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzlq9\" (UniqueName: \"kubernetes.io/projected/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-kube-api-access-jzlq9\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906485 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906494 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906503 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-logs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906510 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906518 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906526 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906533 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.906540 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.907269 5136 scope.go:117] "RemoveContainer" containerID="e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.928374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"960739f0-c4a5-49c6-8e2a-9452815cf1a9","Type":"ContainerDied","Data":"c417f48a18610bbcb3a324c0dd0cc757f54ca7629176ea2441d7c501e41142ce"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.928493 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.937147 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"]
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.951987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"27a464a7-cea7-4265-a264-85a991452e95","Type":"ContainerDied","Data":"a213c0799494e4283f552e4529c929904c7d07c101510facaefb1e2a3e99ab9c"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.952216 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.956085 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-mzns4" event={"ID":"5d2085e7-db7e-4655-965c-027d03e474e0","Type":"ContainerDied","Data":"2b8d445e4425096daf41465721adf2ee58e490471ea6782e4e955f4d28582fd2"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.956216 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-mzns4"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.958593 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerDied","Data":"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.958652 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.958714 5136 generic.go:334] "Generic (PLEG): container finished" podID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerID="bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2" exitCode=0
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.958843 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"23c10323-3c49-4f00-8bf7-319e6f5834d0","Type":"ContainerDied","Data":"472fbef87977bdfc11603315d743a17729300016a4f32222d159ed871e8ca38d"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.973681 5136 scope.go:117] "RemoveContainer" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.974760 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-65ccfb89b4-s479g"]
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.978547 5136 generic.go:334] "Generic (PLEG): container finished" podID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerID="dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0" exitCode=0
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.978649 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-dc8db4fdb-hpjdg"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.981967 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-78df67c79-bqz8t"
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.981992 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78df67c79-bqz8t" event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerDied","Data":"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.982053 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-78df67c79-bqz8t" event={"ID":"6fd2bfe2-2220-4617-ac9a-d02f6222cfd0","Type":"ContainerDied","Data":"87dc6bb8fc1b9abd24b71389abdb4a22e7af9a9d787041070ce4c3a66cfdd142"}
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.985632 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Mar 20 07:15:54 crc kubenswrapper[5136]: I0320 07:15:54.995015 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.013109 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.022147 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-64845646dd-wf28v"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.026624 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.029190 5136 scope.go:117] "RemoveContainer" containerID="e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf"
Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.031683 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf\": container with ID starting with e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf not found: ID does not exist" containerID="e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.031801 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf"} err="failed to get container status \"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf\": rpc error: code = NotFound desc = could not find container \"e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf\": container with ID starting with e19dbd1e39e5efc5a6a0b99e7790d9fb9e1136146e9bec8dcf45b6006114f4bf not found: ID does not exist"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.031907 5136 scope.go:117] "RemoveContainer" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242"
Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.032297 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242\": container with ID starting with 91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242 not found: ID does not exist" containerID="91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.032338 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242"} err="failed to get container status \"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242\": rpc error: code = NotFound desc = could not find container \"91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242\": container with ID starting with 91d0a1875daaba835fa95deb20e986d2977475fb7ef981c80c1940a9ae1d4242 not found: ID does not exist"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.032366 5136 scope.go:117] "RemoveContainer" containerID="b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.036339 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.044144 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.055554 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.069548 5136 scope.go:117] "RemoveContainer" containerID="4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.071354 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.078024 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.090021 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-mzns4"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.092083 5136 scope.go:117] "RemoveContainer" containerID="b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd"
Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.092509 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd\": container with ID starting with b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd not found: ID does not exist" containerID="b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.092541 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd"} err="failed to get container status \"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd\": rpc error: code = NotFound desc = could not find container \"b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd\": container with ID starting with b31f76eec4146fd8f445255e6551fe34e753f46896981935d5fb62c7bb4ad9cd not found: ID does not exist"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.092561 5136 scope.go:117] "RemoveContainer" containerID="4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536"
Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.092867 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536\": container with ID starting with 4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536 not found: ID does not exist" containerID="4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.092888 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536"} err="failed to get container status \"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536\": rpc error: code = NotFound desc = could not find container \"4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536\": container with ID starting with 4829942c2c5a456ff9cc10622e102ac04c1aa13d5b8679ee118976d4497d7536 not found: ID does not exist"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.092901 5136 scope.go:117] "RemoveContainer" containerID="cd65718bfac09f4d934fe1bf3f629f5d852e12343f9a1b480d6984e7497c79aa"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.105391 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-mzns4"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.110859 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.116771 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-dc8db4fdb-hpjdg"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.121629 5136 scope.go:117] "RemoveContainer" containerID="afa35db5921ff57fdde3528ca1cd9c650dbf2f2ac6c46cf9723cca19a0edb997"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.121778 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.126596 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-78df67c79-bqz8t"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.144263 5136 scope.go:117] "RemoveContainer" containerID="184e304c1ac08ec0deea0a800adeaeabbaf3a333a8f4d43b893a721e45afd9b7"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.215376 5136 scope.go:117] "RemoveContainer" containerID="6f8eb1aeebd08bbda86b110a89e3d6395071812ae31b73c86592f671595b894d"
Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.286443 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.297220 5136 scope.go:117] "RemoveContainer" containerID="37086a66c3062e12cadb5382a0b51ad5a523fc39db2e404a6feda0518d0eb230"
Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.308693 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.310616 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"]
Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.310680 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.315262 5136 scope.go:117] "RemoveContainer" containerID="cf4672bac844a81b21416c7a8623ac1f87041db75209a7a401cb201726b76413"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.341873 5136 scope.go:117] "RemoveContainer" containerID="0ecf229966ba8c79d4898c6f188447ddf715aa5d36d580596d93cecd4aca45f3"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.368377 5136 scope.go:117] "RemoveContainer" containerID="7950202bc7e7645f213c50f85961805c7e38b2378de8c350f722fea9bf137e17"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.397994 5136 scope.go:117] "RemoveContainer" containerID="bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.450624 5136 scope.go:117] "RemoveContainer" containerID="1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.474028 5136 scope.go:117] "RemoveContainer" containerID="bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2"
Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.475172 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2\": container with ID starting with bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2 not found: ID does not exist" containerID="bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.475201 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2"} err="failed to get container status \"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2\": rpc error: code = NotFound desc = could not find container \"bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2\": container with ID starting with bb7d36a7a8740f26ea095cd64110aa6bbc822a4f8057c379b5b6f3c659fbfbb2 not found: ID does not exist"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.475220 5136 scope.go:117] "RemoveContainer" containerID="1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080"
Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.475562 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080\": container with ID starting with 1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080 not found: ID does not exist" containerID="1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.475583 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080"} err="failed to get container status \"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080\": rpc error: code = NotFound desc = could not find container \"1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080\": container with ID starting with 1fc02239dd53f1d667b671e4bd013df8e870f1112190dc7d2376ed6447168080 not found: ID does not exist"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.475594 5136 scope.go:117] "RemoveContainer" containerID="dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.505896 5136 scope.go:117] "RemoveContainer" containerID="afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.532438 5136 scope.go:117] "RemoveContainer" containerID="dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0"
Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.538221 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0\": container with ID starting with dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0 not found: ID does not exist" containerID="dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0"
Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.538261 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0"}
err="failed to get container status \"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0\": rpc error: code = NotFound desc = could not find container \"dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0\": container with ID starting with dd15344d154a5f0b2c84bd203f89400d82384bc6a127a0645edaeb2339b3bbd0 not found: ID does not exist" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.538282 5136 scope.go:117] "RemoveContainer" containerID="afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec" Mar 20 07:15:55 crc kubenswrapper[5136]: E0320 07:15:55.539578 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec\": container with ID starting with afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec not found: ID does not exist" containerID="afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.539622 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec"} err="failed to get container status \"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec\": rpc error: code = NotFound desc = could not find container \"afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec\": container with ID starting with afb608f3e5e7d52c4c062978fdcf9cb3c91a3375ac294e5934da524d82c58aec not found: ID does not exist" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.578058 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.636960 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637009 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-698x2\" (UniqueName: \"kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637028 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637046 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637065 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637100 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637118 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.637210 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts\") pod \"fab90141-26b4-4e46-a916-82190508d6e8\" (UID: \"fab90141-26b4-4e46-a916-82190508d6e8\") " Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.641440 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts" (OuterVolumeSpecName: "scripts") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.641487 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2" (OuterVolumeSpecName: "kube-api-access-698x2") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "kube-api-access-698x2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.642015 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.642542 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.658152 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.659502 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data" (OuterVolumeSpecName: "config-data") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.678121 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.684549 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fab90141-26b4-4e46-a916-82190508d6e8" (UID: "fab90141-26b4-4e46-a916-82190508d6e8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738607 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738640 5136 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738653 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-698x2\" (UniqueName: \"kubernetes.io/projected/fab90141-26b4-4e46-a916-82190508d6e8-kube-api-access-698x2\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738661 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-internal-tls-certs\") on node \"crc\" 
DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738669 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738677 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738685 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.738693 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fab90141-26b4-4e46-a916-82190508d6e8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.786325 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" probeResult="failure" output="command timed out" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.823083 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" probeResult="failure" output=< Mar 20 07:15:55 crc kubenswrapper[5136]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Mar 20 07:15:55 crc kubenswrapper[5136]: > Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.994255 5136 generic.go:334] "Generic (PLEG): container finished" podID="fab90141-26b4-4e46-a916-82190508d6e8" 
containerID="55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e" exitCode=0 Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.994305 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-766d94c967-pb9qd" Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.994324 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766d94c967-pb9qd" event={"ID":"fab90141-26b4-4e46-a916-82190508d6e8","Type":"ContainerDied","Data":"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e"} Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.994376 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-766d94c967-pb9qd" event={"ID":"fab90141-26b4-4e46-a916-82190508d6e8","Type":"ContainerDied","Data":"c19785656f47dd95cc1a27542636229f68d56209966c28654c4de9baa2a90613"} Mar 20 07:15:55 crc kubenswrapper[5136]: I0320 07:15:55.994399 5136 scope.go:117] "RemoveContainer" containerID="55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.028574 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-766d94c967-pb9qd"] Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.032216 5136 scope.go:117] "RemoveContainer" containerID="55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e" Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.033080 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e\": container with ID starting with 55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e not found: ID does not exist" containerID="55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.033119 5136 pod_container_deletor.go:53] "DeleteContainer 
returned error" containerID={"Type":"cri-o","ID":"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e"} err="failed to get container status \"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e\": rpc error: code = NotFound desc = could not find container \"55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e\": container with ID starting with 55a5dab58a97a1d0db07eef0ac66d273a9614c32013fdde5757d7e6aa9ea047e not found: ID does not exist" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.034883 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-766d94c967-pb9qd"] Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.035657 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.036837 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.037778 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.037839 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler" Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.244783 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.244875 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data podName:c355061d-c5fd-4655-aa7e-37b5a40a0400 nodeName:}" failed. No retries permitted until 2026-03-20 07:16:04.244859888 +0000 UTC m=+1596.504171039 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data") pod "rabbitmq-cell1-server-0" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400") : configmap "rabbitmq-cell1-config-data" not found Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.409928 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" path="/var/lib/kubelet/pods/141e5942-2bf9-424c-a6a7-7c93afdad7dc/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.410800 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17669c27-ef49-4ced-a620-ef7394f02110" path="/var/lib/kubelet/pods/17669c27-ef49-4ced-a620-ef7394f02110/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.411373 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" path="/var/lib/kubelet/pods/23c10323-3c49-4f00-8bf7-319e6f5834d0/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.412714 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27a464a7-cea7-4265-a264-85a991452e95" 
path="/var/lib/kubelet/pods/27a464a7-cea7-4265-a264-85a991452e95/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.413588 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" path="/var/lib/kubelet/pods/2a59ab3d-3094-4e10-bbde-44479696f752/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.414802 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" path="/var/lib/kubelet/pods/5d2085e7-db7e-4655-965c-027d03e474e0/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.415394 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6638ac71-bcca-4dbb-9ec3-d9ef0da336db" path="/var/lib/kubelet/pods/6638ac71-bcca-4dbb-9ec3-d9ef0da336db/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.415860 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" path="/var/lib/kubelet/pods/6fd2bfe2-2220-4617-ac9a-d02f6222cfd0/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.416626 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" path="/var/lib/kubelet/pods/7acbc76f-ff83-451e-826f-5fd1f977f74f/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.418035 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" path="/var/lib/kubelet/pods/960739f0-c4a5-49c6-8e2a-9452815cf1a9/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.418673 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" path="/var/lib/kubelet/pods/9dc2d320-2468-4a45-ba6b-69ea478b5e8c/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.419960 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd71646c-cb64-4a01-8076-449c812955d5" 
path="/var/lib/kubelet/pods/bd71646c-cb64-4a01-8076-449c812955d5/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.420683 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c17493c5-d958-46ab-8e02-d190b2fa6944" path="/var/lib/kubelet/pods/c17493c5-d958-46ab-8e02-d190b2fa6944/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.421208 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7" path="/var/lib/kubelet/pods/d10d1a70-a7a8-49dd-914d-ed5bb6db2bb7/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.421597 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" path="/var/lib/kubelet/pods/f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.422832 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fab90141-26b4-4e46-a916-82190508d6e8" path="/var/lib/kubelet/pods/fab90141-26b4-4e46-a916-82190508d6e8/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.423477 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" path="/var/lib/kubelet/pods/fe20adf9-d6e2-4487-a176-32ddd55eb051/volumes" Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.802346 5136 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Mar 20 07:15:56 crc kubenswrapper[5136]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-03-20T07:15:49Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Mar 20 07:15:56 crc kubenswrapper[5136]: /etc/init.d/functions: line 589: 379 Alarm clock "$@" Mar 20 07:15:56 crc kubenswrapper[5136]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-gnwt6" message=< Mar 20 07:15:56 crc kubenswrapper[5136]: Exiting 
ovn-controller (1) [FAILED] Mar 20 07:15:56 crc kubenswrapper[5136]: Killing ovn-controller (1) [ OK ] Mar 20 07:15:56 crc kubenswrapper[5136]: 2026-03-20T07:15:49Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Mar 20 07:15:56 crc kubenswrapper[5136]: /etc/init.d/functions: line 589: 379 Alarm clock "$@" Mar 20 07:15:56 crc kubenswrapper[5136]: > Mar 20 07:15:56 crc kubenswrapper[5136]: E0320 07:15:56.803986 5136 kuberuntime_container.go:691] "PreStop hook failed" err=< Mar 20 07:15:56 crc kubenswrapper[5136]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-03-20T07:15:49Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Mar 20 07:15:56 crc kubenswrapper[5136]: /etc/init.d/functions: line 589: 379 Alarm clock "$@" Mar 20 07:15:56 crc kubenswrapper[5136]: > pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" containerID="cri-o://a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.804034 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-gnwt6" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" containerID="cri-o://a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff" gracePeriod=22 Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.933903 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.934636 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957322 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957374 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957420 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957448 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957470 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957498 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957523 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957542 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skh55\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957571 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957588 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957610 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 
07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957630 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p49dt\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957657 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957683 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957713 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957743 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957768 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957847 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957847 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957874 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957895 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info\") pod \"261514f8-7734-423d-b15a-e83fdc2a85fd\" (UID: \"261514f8-7734-423d-b15a-e83fdc2a85fd\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.957911 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc 
kubenswrapper[5136]: I0320 07:15:56.957938 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie\") pod \"c355061d-c5fd-4655-aa7e-37b5a40a0400\" (UID: \"c355061d-c5fd-4655-aa7e-37b5a40a0400\") " Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.958177 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.958210 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.958620 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.959403 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.960128 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.973255 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.981343 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt" (OuterVolumeSpecName: "kube-api-access-p49dt") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "kube-api-access-p49dt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.981521 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info" (OuterVolumeSpecName: "pod-info") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.982355 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55" (OuterVolumeSpecName: "kube-api-access-skh55") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "kube-api-access-skh55". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.983034 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.983209 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info" (OuterVolumeSpecName: "pod-info") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.983358 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.985152 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:56 crc kubenswrapper[5136]: I0320 07:15:56.986864 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.005055 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.008937 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.017144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data" (OuterVolumeSpecName: "config-data") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.028525 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data" (OuterVolumeSpecName: "config-data") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.032906 5136 generic.go:334] "Generic (PLEG): container finished" podID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerID="ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357" exitCode=0 Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.033046 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerDied","Data":"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357"} Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.033074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"c355061d-c5fd-4655-aa7e-37b5a40a0400","Type":"ContainerDied","Data":"22b2668fe332b62f7864af2d759b5866cf033333320267d52cb7cec04a426bd9"} Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.033069 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.033111 5136 scope.go:117] "RemoveContainer" containerID="ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.035432 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gnwt6_04ee32c0-35eb-488d-b166-0ad8a8d09f48/ovn-controller/0.log" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.035465 5136 generic.go:334] "Generic (PLEG): container finished" podID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerID="a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff" exitCode=139 Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.035515 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6" event={"ID":"04ee32c0-35eb-488d-b166-0ad8a8d09f48","Type":"ContainerDied","Data":"a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff"} Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.053393 5136 generic.go:334] "Generic (PLEG): container finished" podID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerID="b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155" exitCode=0 Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.053438 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerDied","Data":"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155"} Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.053465 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"261514f8-7734-423d-b15a-e83fdc2a85fd","Type":"ContainerDied","Data":"3dd70fd22c8a29190bae59972f90dd4530137cb93e7e5a8ebd5a576dd4e2a33b"} Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.053537 5136 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.053956 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf" (OuterVolumeSpecName: "server-conf") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.058269 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf" (OuterVolumeSpecName: "server-conf") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059110 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-server-conf\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059129 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059139 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059167 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Mar 20 07:15:57 crc 
kubenswrapper[5136]: I0320 07:15:57.059176 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/c355061d-c5fd-4655-aa7e-37b5a40a0400-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059186 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/261514f8-7734-423d-b15a-e83fdc2a85fd-pod-info\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059200 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059209 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059217 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059225 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059232 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/c355061d-c5fd-4655-aa7e-37b5a40a0400-pod-info\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059240 5136 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059248 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skh55\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-kube-api-access-skh55\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059256 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-server-conf\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059264 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/261514f8-7734-423d-b15a-e83fdc2a85fd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059272 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/261514f8-7734-423d-b15a-e83fdc2a85fd-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059280 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p49dt\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-kube-api-access-p49dt\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059288 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.059297 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/c355061d-c5fd-4655-aa7e-37b5a40a0400-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.061546 5136 scope.go:117] "RemoveContainer" containerID="746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.073702 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "c355061d-c5fd-4655-aa7e-37b5a40a0400" (UID: "c355061d-c5fd-4655-aa7e-37b5a40a0400"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.073923 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.074509 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.081946 5136 scope.go:117] "RemoveContainer" containerID="ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357" Mar 20 07:15:57 crc kubenswrapper[5136]: E0320 07:15:57.082655 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357\": container with ID starting with ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357 not found: ID does not exist" containerID="ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.082700 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357"} err="failed to get container status \"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357\": rpc error: code = NotFound desc = could not find container \"ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357\": container with ID starting with ff4b7aff265183005eb642c68e8667907af1fc5245ae33df31209cf856166357 not found: ID does not exist" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.082728 5136 scope.go:117] "RemoveContainer" containerID="746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71" Mar 20 07:15:57 crc kubenswrapper[5136]: E0320 07:15:57.084337 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71\": container with ID starting with 746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71 not found: ID does not exist" containerID="746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.084374 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71"} err="failed to get container status \"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71\": rpc error: code = NotFound desc = could not find container \"746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71\": container with ID starting with 746322d2db71676f9bd31243b4275dc45e3e2e6e4004a673722e12fe50a71b71 not found: ID does not exist" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.084398 5136 scope.go:117] "RemoveContainer" containerID="b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.099468 5136 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "261514f8-7734-423d-b15a-e83fdc2a85fd" (UID: "261514f8-7734-423d-b15a-e83fdc2a85fd"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.100850 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gnwt6_04ee32c0-35eb-488d-b166-0ad8a8d09f48/ovn-controller/0.log" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.100909 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-gnwt6" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.101294 5136 scope.go:117] "RemoveContainer" containerID="3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.120981 5136 scope.go:117] "RemoveContainer" containerID="b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155" Mar 20 07:15:57 crc kubenswrapper[5136]: E0320 07:15:57.121431 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155\": container with ID starting with b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155 not found: ID does not exist" containerID="b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.121460 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155"} err="failed to get container status \"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155\": rpc error: code = NotFound desc = could not find container 
\"b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155\": container with ID starting with b5366b3f1cab37ea1c4eff85c13a2c8a37fad4de97077b3f34ec7d7a1d871155 not found: ID does not exist" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.121482 5136 scope.go:117] "RemoveContainer" containerID="3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29" Mar 20 07:15:57 crc kubenswrapper[5136]: E0320 07:15:57.121681 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29\": container with ID starting with 3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29 not found: ID does not exist" containerID="3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.121704 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29"} err="failed to get container status \"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29\": rpc error: code = NotFound desc = could not find container \"3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29\": container with ID starting with 3fb57e476621bbb7ed332df0284e4781d9823f68927c324a91e0d733e1eccb29 not found: ID does not exist" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160090 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160151 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160173 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhfl7\" (UniqueName: \"kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160255 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160299 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160871 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160927 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.160954 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn\") pod \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\" (UID: \"04ee32c0-35eb-488d-b166-0ad8a8d09f48\") " Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161167 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run" (OuterVolumeSpecName: "var-run") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161219 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161359 5136 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161501 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161569 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161504 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts" (OuterVolumeSpecName: "scripts") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161646 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/c355061d-c5fd-4655-aa7e-37b5a40a0400-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.161762 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/261514f8-7734-423d-b15a-e83fdc2a85fd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.163520 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7" (OuterVolumeSpecName: "kube-api-access-xhfl7") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "kube-api-access-xhfl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.176651 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.213757 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "04ee32c0-35eb-488d-b166-0ad8a8d09f48" (UID: "04ee32c0-35eb-488d-b166-0ad8a8d09f48"). InnerVolumeSpecName "ovn-controller-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262925 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262948 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhfl7\" (UniqueName: \"kubernetes.io/projected/04ee32c0-35eb-488d-b166-0ad8a8d09f48-kube-api-access-xhfl7\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262958 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04ee32c0-35eb-488d-b166-0ad8a8d09f48-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262968 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04ee32c0-35eb-488d-b166-0ad8a8d09f48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262976 5136 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.262984 5136 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04ee32c0-35eb-488d-b166-0ad8a8d09f48-var-log-ovn\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.380213 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.397071 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 07:15:57 
crc kubenswrapper[5136]: I0320 07:15:57.410174 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 07:15:57 crc kubenswrapper[5136]: I0320 07:15:57.416994 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.078287 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-gnwt6_04ee32c0-35eb-488d-b166-0ad8a8d09f48/ovn-controller/0.log" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.078394 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-gnwt6" event={"ID":"04ee32c0-35eb-488d-b166-0ad8a8d09f48","Type":"ContainerDied","Data":"969e50d91cdce234e3ebd25af89de94a9345b9463c4d70197f2dbbaa911c914f"} Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.078432 5136 scope.go:117] "RemoveContainer" containerID="a111f866aa708f4a724ab9b641db43d52d756bb1dc91884a9311a1d65141faff" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.078437 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-gnwt6" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.083320 5136 generic.go:334] "Generic (PLEG): container finished" podID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" exitCode=0 Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.083362 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f2e8f54f-5434-4cf0-94b9-38648bf7ba77","Type":"ContainerDied","Data":"103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5"} Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.087262 5136 generic.go:334] "Generic (PLEG): container finished" podID="52463352-7504-47a4-92e5-d672bab85574" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" exitCode=0 Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.087296 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"52463352-7504-47a4-92e5-d672bab85574","Type":"ContainerDied","Data":"f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9"} Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.132873 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-gnwt6"] Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.141996 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-gnwt6"] Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.158750 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.169:8776/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.239736 5136 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.251891 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403325 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data\") pod \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403436 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle\") pod \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403765 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data\") pod \"52463352-7504-47a4-92e5-d672bab85574\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403844 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzjvz\" (UniqueName: \"kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz\") pod \"52463352-7504-47a4-92e5-d672bab85574\" (UID: \"52463352-7504-47a4-92e5-d672bab85574\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403871 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle\") pod \"52463352-7504-47a4-92e5-d672bab85574\" (UID: 
\"52463352-7504-47a4-92e5-d672bab85574\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.403906 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbvkl\" (UniqueName: \"kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl\") pod \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\" (UID: \"f2e8f54f-5434-4cf0-94b9-38648bf7ba77\") " Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.405010 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" path="/var/lib/kubelet/pods/04ee32c0-35eb-488d-b166-0ad8a8d09f48/volumes" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.405912 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" path="/var/lib/kubelet/pods/261514f8-7734-423d-b15a-e83fdc2a85fd/volumes" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.407374 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" path="/var/lib/kubelet/pods/c355061d-c5fd-4655-aa7e-37b5a40a0400/volumes" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.408487 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz" (OuterVolumeSpecName: "kube-api-access-dzjvz") pod "52463352-7504-47a4-92e5-d672bab85574" (UID: "52463352-7504-47a4-92e5-d672bab85574"). InnerVolumeSpecName "kube-api-access-dzjvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.418257 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl" (OuterVolumeSpecName: "kube-api-access-vbvkl") pod "f2e8f54f-5434-4cf0-94b9-38648bf7ba77" (UID: "f2e8f54f-5434-4cf0-94b9-38648bf7ba77"). 
InnerVolumeSpecName "kube-api-access-vbvkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.429572 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data" (OuterVolumeSpecName: "config-data") pod "f2e8f54f-5434-4cf0-94b9-38648bf7ba77" (UID: "f2e8f54f-5434-4cf0-94b9-38648bf7ba77"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.429591 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "52463352-7504-47a4-92e5-d672bab85574" (UID: "52463352-7504-47a4-92e5-d672bab85574"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.430189 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2e8f54f-5434-4cf0-94b9-38648bf7ba77" (UID: "f2e8f54f-5434-4cf0-94b9-38648bf7ba77"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.433088 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data" (OuterVolumeSpecName: "config-data") pod "52463352-7504-47a4-92e5-d672bab85574" (UID: "52463352-7504-47a4-92e5-d672bab85574"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505136 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505163 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505173 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzjvz\" (UniqueName: \"kubernetes.io/projected/52463352-7504-47a4-92e5-d672bab85574-kube-api-access-dzjvz\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505186 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52463352-7504-47a4-92e5-d672bab85574-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505194 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbvkl\" (UniqueName: \"kubernetes.io/projected/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-kube-api-access-vbvkl\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.505203 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2e8f54f-5434-4cf0-94b9-38648bf7ba77-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:15:58 crc kubenswrapper[5136]: I0320 07:15:58.685374 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.107:11211: i/o timeout" Mar 20 07:15:59 crc 
kubenswrapper[5136]: I0320 07:15:59.083018 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64845646dd-wf28v" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.083062 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-64845646dd-wf28v" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.164:9311/healthcheck\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.097204 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"f2e8f54f-5434-4cf0-94b9-38648bf7ba77","Type":"ContainerDied","Data":"cc1f222689540ab41cb0293a65f9305d971ad1f909b8b87d7d0b7c47db1a4f3a"} Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.097248 5136 scope.go:117] "RemoveContainer" containerID="103dd4a533822b5106c261bfa1f3d9201de7d9327b3b2827802ee9cd5b825fc5" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.097243 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.098807 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.098823 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"52463352-7504-47a4-92e5-d672bab85574","Type":"ContainerDied","Data":"d25cabad936d4a8da77263639f37547fcf3ffbbafde65e2d7285a8e382e5513c"} Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.118243 5136 scope.go:117] "RemoveContainer" containerID="f56d00c5a5534f7bc12a714ef821f350a8f4afb4f1d3b31f9016e24b955644f9" Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.153397 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.170236 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.178101 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:59 crc kubenswrapper[5136]: I0320 07:15:59.183909 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.779892 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.780161 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" 
containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.780425 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.780446 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.780826 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.781758 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.782780 5136 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:15:59 crc kubenswrapper[5136]: E0320 07:15:59.782804 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127471 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566516-2dnr7"] Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127756 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127772 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127783 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127790 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127797 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127804 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127821 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fab90141-26b4-4e46-a916-82190508d6e8" containerName="keystone-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127827 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fab90141-26b4-4e46-a916-82190508d6e8" containerName="keystone-api" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127838 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127844 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127852 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-central-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127857 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-central-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127868 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127874 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127885 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="mysql-bootstrap" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127893 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="mysql-bootstrap" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127904 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="galera" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127910 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="galera" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127920 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerName="memcached" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127926 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerName="memcached" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127934 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-metadata" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127940 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-metadata" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127952 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127959 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127969 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="openstack-network-exporter" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127975 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="openstack-network-exporter" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.127986 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.127994 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-api" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128003 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128010 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128022 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-notification-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128030 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-notification-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128041 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128048 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128058 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="setup-container" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128066 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="setup-container" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128078 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128085 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128101 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128109 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128119 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128126 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128134 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128141 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128148 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128155 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128164 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128170 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128178 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="ovn-northd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128184 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="ovn-northd" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128192 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c17493c5-d958-46ab-8e02-d190b2fa6944" containerName="kube-state-metrics" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128198 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c17493c5-d958-46ab-8e02-d190b2fa6944" containerName="kube-state-metrics" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128207 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="sg-core" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128213 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="sg-core" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128227 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128232 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128240 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128247 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128257 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128264 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128274 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="proxy-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128280 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="proxy-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128291 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128297 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-api" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128306 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="setup-container" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128311 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="setup-container" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128321 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128327 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128337 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128342 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-log" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128352 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128357 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: E0320 07:16:00.128367 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128374 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128517 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128527 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128537 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="23c10323-3c49-4f00-8bf7-319e6f5834d0" containerName="galera" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128547 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c355061d-c5fd-4655-aa7e-37b5a40a0400" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128557 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128566 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128575 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ee32c0-35eb-488d-b166-0ad8a8d09f48" containerName="ovn-controller" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128585 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="sg-core" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128592 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dc2d320-2468-4a45-ba6b-69ea478b5e8c" containerName="nova-api-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128600 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128611 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128623 5136 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="141e5942-2bf9-424c-a6a7-7c93afdad7dc" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128633 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a59ab3d-3094-4e10-bbde-44479696f752" containerName="barbican-keystone-listener-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128642 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="openstack-network-exporter" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128653 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128661 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-central-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128671 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd71646c-cb64-4a01-8076-449c812955d5" containerName="placement-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128678 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128684 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="52463352-7504-47a4-92e5-d672bab85574" containerName="nova-cell1-conductor-conductor" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128693 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7acbc76f-ff83-451e-826f-5fd1f977f74f" containerName="ovn-northd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128700 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 
07:16:00.128709 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe20adf9-d6e2-4487-a176-32ddd55eb051" containerName="glance-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128721 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128728 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" containerName="nova-scheduler-scheduler" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128736 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="ceilometer-notification-agent" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128743 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="261514f8-7734-423d-b15a-e83fdc2a85fd" containerName="rabbitmq" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128752 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fab90141-26b4-4e46-a916-82190508d6e8" containerName="keystone-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128760 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d08c01-d488-4f36-9998-7f074633c7c5" containerName="cinder-api-log" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128768 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2bf7a9d-44f9-407f-8a6c-6bc56ddde30b" containerName="barbican-api" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128776 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c17493c5-d958-46ab-8e02-d190b2fa6944" containerName="kube-state-metrics" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128785 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="960739f0-c4a5-49c6-8e2a-9452815cf1a9" containerName="memcached" 
Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128795 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="27a464a7-cea7-4265-a264-85a991452e95" containerName="proxy-httpd" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128804 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fd2bfe2-2220-4617-ac9a-d02f6222cfd0" containerName="barbican-worker" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.128823 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="af66742a-1452-436f-a22e-7dc277cf690a" containerName="nova-metadata-metadata" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.129273 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.130553 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.132727 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.132874 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.134591 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566516-2dnr7"] Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.230534 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbdkf\" (UniqueName: \"kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf\") pod \"auto-csr-approver-29566516-2dnr7\" (UID: \"3a1410b1-69b7-42b6-85c9-967dbbc05b08\") " pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 
07:16:00.332087 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbdkf\" (UniqueName: \"kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf\") pod \"auto-csr-approver-29566516-2dnr7\" (UID: \"3a1410b1-69b7-42b6-85c9-967dbbc05b08\") " pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.352090 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbdkf\" (UniqueName: \"kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf\") pod \"auto-csr-approver-29566516-2dnr7\" (UID: \"3a1410b1-69b7-42b6-85c9-967dbbc05b08\") " pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.415405 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52463352-7504-47a4-92e5-d672bab85574" path="/var/lib/kubelet/pods/52463352-7504-47a4-92e5-d672bab85574/volumes" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.417290 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2e8f54f-5434-4cf0-94b9-38648bf7ba77" path="/var/lib/kubelet/pods/f2e8f54f-5434-4cf0-94b9-38648bf7ba77/volumes" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.447279 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.885157 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566516-2dnr7"] Mar 20 07:16:00 crc kubenswrapper[5136]: I0320 07:16:00.890117 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:16:01 crc kubenswrapper[5136]: I0320 07:16:01.122326 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" event={"ID":"3a1410b1-69b7-42b6-85c9-967dbbc05b08","Type":"ContainerStarted","Data":"cc1855cd77ceffda8a136d53ffe756e3172dcb6e0e61666af9010486d1f9e14d"} Mar 20 07:16:03 crc kubenswrapper[5136]: I0320 07:16:03.101320 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6ff4f58fb9-7gtff" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.159:9696/\": dial tcp 10.217.0.159:9696: connect: connection refused" Mar 20 07:16:03 crc kubenswrapper[5136]: I0320 07:16:03.146193 5136 generic.go:334] "Generic (PLEG): container finished" podID="3a1410b1-69b7-42b6-85c9-967dbbc05b08" containerID="71a9b19bcf4bcf8c4a69410e7ffac0d108a4db9d76a7cd352479549f5c15e6f8" exitCode=0 Mar 20 07:16:03 crc kubenswrapper[5136]: I0320 07:16:03.146243 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" event={"ID":"3a1410b1-69b7-42b6-85c9-967dbbc05b08","Type":"ContainerDied","Data":"71a9b19bcf4bcf8c4a69410e7ffac0d108a4db9d76a7cd352479549f5c15e6f8"} Mar 20 07:16:04 crc kubenswrapper[5136]: I0320 07:16:04.546773 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:04 crc kubenswrapper[5136]: I0320 07:16:04.704706 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbdkf\" (UniqueName: \"kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf\") pod \"3a1410b1-69b7-42b6-85c9-967dbbc05b08\" (UID: \"3a1410b1-69b7-42b6-85c9-967dbbc05b08\") " Mar 20 07:16:04 crc kubenswrapper[5136]: I0320 07:16:04.722722 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf" (OuterVolumeSpecName: "kube-api-access-bbdkf") pod "3a1410b1-69b7-42b6-85c9-967dbbc05b08" (UID: "3a1410b1-69b7-42b6-85c9-967dbbc05b08"). InnerVolumeSpecName "kube-api-access-bbdkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.779366 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.779831 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.780327 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.780393 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.785978 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.787250 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.790907 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] 
Mar 20 07:16:04 crc kubenswrapper[5136]: E0320 07:16:04.790939 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:04 crc kubenswrapper[5136]: I0320 07:16:04.807215 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbdkf\" (UniqueName: \"kubernetes.io/projected/3a1410b1-69b7-42b6-85c9-967dbbc05b08-kube-api-access-bbdkf\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.048553 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.166896 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" event={"ID":"3a1410b1-69b7-42b6-85c9-967dbbc05b08","Type":"ContainerDied","Data":"cc1855cd77ceffda8a136d53ffe756e3172dcb6e0e61666af9010486d1f9e14d"} Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.166934 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc1855cd77ceffda8a136d53ffe756e3172dcb6e0e61666af9010486d1f9e14d" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.166942 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566516-2dnr7" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.168523 5136 generic.go:334] "Generic (PLEG): container finished" podID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerID="8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153" exitCode=0 Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.168559 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerDied","Data":"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153"} Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.168583 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6ff4f58fb9-7gtff" event={"ID":"5c52887a-70a8-4d00-a1f9-a5677fa48d1f","Type":"ContainerDied","Data":"6635a1786b5854425a3f89e3fd4433884c9eeba6fdc2878722b9acdad452ee38"} Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.168600 5136 scope.go:117] "RemoveContainer" containerID="f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.168608 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6ff4f58fb9-7gtff" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.189993 5136 scope.go:117] "RemoveContainer" containerID="8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.211477 5136 scope.go:117] "RemoveContainer" containerID="f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685" Mar 20 07:16:05 crc kubenswrapper[5136]: E0320 07:16:05.211883 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685\": container with ID starting with f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685 not found: ID does not exist" containerID="f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.211943 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685"} err="failed to get container status \"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685\": rpc error: code = NotFound desc = could not find container \"f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685\": container with ID starting with f9410d85b9f6582bf241e2b71e3f09bc7bd5e4aee399a12327173bcac6224685 not found: ID does not exist" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.211965 5136 scope.go:117] "RemoveContainer" containerID="8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213074 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: 
\"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213122 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj27q\" (UniqueName: \"kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: E0320 07:16:05.213084 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153\": container with ID starting with 8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153 not found: ID does not exist" containerID="8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213176 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153"} err="failed to get container status \"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153\": rpc error: code = NotFound desc = could not find container \"8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153\": container with ID starting with 8fe888c22382b419f6b05cbe043d6eb25912198e1d48b45463c92862edd75153 not found: ID does not exist" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213189 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213263 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213292 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213343 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.213366 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs\") pod \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\" (UID: \"5c52887a-70a8-4d00-a1f9-a5677fa48d1f\") " Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.217839 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.218028 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q" (OuterVolumeSpecName: "kube-api-access-mj27q") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "kube-api-access-mj27q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.251183 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.251955 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config" (OuterVolumeSpecName: "config") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.252000 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.252618 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.275037 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "5c52887a-70a8-4d00-a1f9-a5677fa48d1f" (UID: "5c52887a-70a8-4d00-a1f9-a5677fa48d1f"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.314942 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.314991 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.315003 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.315012 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj27q\" (UniqueName: \"kubernetes.io/projected/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-kube-api-access-mj27q\") on node \"crc\" 
DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.315023 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.315031 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.315039 5136 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c52887a-70a8-4d00-a1f9-a5677fa48d1f-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.564110 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"] Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.579240 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6ff4f58fb9-7gtff"] Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.607582 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566510-bn9cf"] Mar 20 07:16:05 crc kubenswrapper[5136]: I0320 07:16:05.612757 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566510-bn9cf"] Mar 20 07:16:06 crc kubenswrapper[5136]: I0320 07:16:06.406660 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17242c2e-8526-49cf-89dd-e35bd97c6626" path="/var/lib/kubelet/pods/17242c2e-8526-49cf-89dd-e35bd97c6626/volumes" Mar 20 07:16:06 crc kubenswrapper[5136]: I0320 07:16:06.407773 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" path="/var/lib/kubelet/pods/5c52887a-70a8-4d00-a1f9-a5677fa48d1f/volumes" Mar 20 07:16:09 
crc kubenswrapper[5136]: E0320 07:16:09.778699 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.780027 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.780447 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.780535 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.781852 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code 
= Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.783797 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.785837 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:09 crc kubenswrapper[5136]: E0320 07:16:09.785875 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.778872 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.779603 5136 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.779848 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.779872 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.780250 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.781502 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.782724 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Mar 20 07:16:14 crc kubenswrapper[5136]: E0320 07:16:14.782793 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-ldp4w" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:18 crc kubenswrapper[5136]: I0320 07:16:18.927930 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:16:18 crc kubenswrapper[5136]: I0320 07:16:18.928207 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.008222 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ldp4w_5f48f721-42c9-4f2b-a461-2ad47a1dea3d/ovs-vswitchd/0.log" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.008972 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025660 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025736 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-482rz\" (UniqueName: \"kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025755 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77dwk\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.025998 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026026 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026187 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026227 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026243 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle\") pod \"31adef78-59fe-4327-9586-0c12177c7bb7\" (UID: \"31adef78-59fe-4327-9586-0c12177c7bb7\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026284 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rqmjf\" (UniqueName: \"kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " Mar 20 07:16:19 crc kubenswrapper[5136]: 
I0320 07:16:19.026303 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026326 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026358 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026379 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026408 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\" (UID: \"dd944fb6-1517-4f5b-b579-79d8f1f3da19\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026437 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " 
Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.026471 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts\") pod \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\" (UID: \"5f48f721-42c9-4f2b-a461-2ad47a1dea3d\") " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.027886 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run" (OuterVolumeSpecName: "var-run") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.028735 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache" (OuterVolumeSpecName: "cache") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.028773 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib" (OuterVolumeSpecName: "var-lib") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.028988 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.029312 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts" (OuterVolumeSpecName: "scripts") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.029996 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log" (OuterVolumeSpecName: "var-log") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.030253 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.030664 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock" (OuterVolumeSpecName: "lock") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.033865 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034323 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034327 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf" (OuterVolumeSpecName: "kube-api-access-rqmjf") pod "5f48f721-42c9-4f2b-a461-2ad47a1dea3d" (UID: "5f48f721-42c9-4f2b-a461-2ad47a1dea3d"). InnerVolumeSpecName "kube-api-access-rqmjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034345 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk" (OuterVolumeSpecName: "kube-api-access-77dwk") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "kube-api-access-77dwk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034326 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034414 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz" (OuterVolumeSpecName: "kube-api-access-482rz") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "kube-api-access-482rz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.034975 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts" (OuterVolumeSpecName: "scripts") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.068035 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.105505 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data" (OuterVolumeSpecName: "config-data") pod "31adef78-59fe-4327-9586-0c12177c7bb7" (UID: "31adef78-59fe-4327-9586-0c12177c7bb7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129206 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rqmjf\" (UniqueName: \"kubernetes.io/projected/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-kube-api-access-rqmjf\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129235 5136 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-log\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129244 5136 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-lib\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129253 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129280 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129290 5136 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: 
\"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-etc-ovs\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129298 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129308 5136 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-cache\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129316 5136 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/5f48f721-42c9-4f2b-a461-2ad47a1dea3d-var-run\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129325 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/31adef78-59fe-4327-9586-0c12177c7bb7-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129334 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-482rz\" (UniqueName: \"kubernetes.io/projected/31adef78-59fe-4327-9586-0c12177c7bb7-kube-api-access-482rz\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129343 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77dwk\" (UniqueName: \"kubernetes.io/projected/dd944fb6-1517-4f5b-b579-79d8f1f3da19-kube-api-access-77dwk\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129352 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc 
kubenswrapper[5136]: I0320 07:16:19.129360 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129368 5136 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/dd944fb6-1517-4f5b-b579-79d8f1f3da19-lock\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129377 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.129384 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31adef78-59fe-4327-9586-0c12177c7bb7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.142010 5136 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.230971 5136 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.291397 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd944fb6-1517-4f5b-b579-79d8f1f3da19" (UID: "dd944fb6-1517-4f5b-b579-79d8f1f3da19"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.295690 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-ldp4w_5f48f721-42c9-4f2b-a461-2ad47a1dea3d/ovs-vswitchd/0.log" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.296482 5136 generic.go:334] "Generic (PLEG): container finished" podID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" exitCode=137 Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.296533 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-ldp4w" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.296629 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerDied","Data":"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.296670 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-ldp4w" event={"ID":"5f48f721-42c9-4f2b-a461-2ad47a1dea3d","Type":"ContainerDied","Data":"ecef44b4bd97cd40f7c1c2de9472cdb09460ec1aa1b9eb32b1b7e366da3578d0"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.296689 5136 scope.go:117] "RemoveContainer" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.300661 5136 generic.go:334] "Generic (PLEG): container finished" podID="31adef78-59fe-4327-9586-0c12177c7bb7" containerID="70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c" exitCode=137 Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.300711 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerDied","Data":"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.300738 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"31adef78-59fe-4327-9586-0c12177c7bb7","Type":"ContainerDied","Data":"68a0376e88b4b3da7cb1aed58c92f9e17081913aac9827be120b7a59b01a2ab0"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.300878 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.310574 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerID="f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a" exitCode=137 Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.310608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.310634 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"dd944fb6-1517-4f5b-b579-79d8f1f3da19","Type":"ContainerDied","Data":"d8cd982e91f64705da20c6a48fa3020dac8ffb0c31aec91bfc9c77ff27912742"} Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.310728 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.327528 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-ldp4w"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.332297 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd944fb6-1517-4f5b-b579-79d8f1f3da19-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.332993 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-ldp4w"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.338128 5136 scope.go:117] "RemoveContainer" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.358037 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.372339 5136 scope.go:117] "RemoveContainer" containerID="251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.377760 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.383262 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.388875 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.406120 5136 scope.go:117] "RemoveContainer" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.406926 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c\": container with ID starting with f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c not found: ID does not exist" containerID="f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.406967 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c"} err="failed to get container status \"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c\": rpc error: code = NotFound desc = could not find container \"f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c\": container with ID starting with f116bfd07f5e12feaa9d3b2892c633756b41b62152939d7ad83568af5e65d36c not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.407000 5136 scope.go:117] "RemoveContainer" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.408017 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58\": container with ID starting with 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 not found: ID does not exist" containerID="5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.408044 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58"} err="failed to get container status \"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58\": rpc error: code = NotFound desc = could not find container \"5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58\": container with ID 
starting with 5ddc526cdf5085005713b82ec2c6b127b06a4f42542a6a93dc9a58f656909c58 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.408058 5136 scope.go:117] "RemoveContainer" containerID="251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.408361 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56\": container with ID starting with 251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56 not found: ID does not exist" containerID="251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.408381 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56"} err="failed to get container status \"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56\": rpc error: code = NotFound desc = could not find container \"251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56\": container with ID starting with 251425a84f071e1549cd0fc562a37265667b19ac3157615c6641ae85a12bec56 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.408396 5136 scope.go:117] "RemoveContainer" containerID="46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.429797 5136 scope.go:117] "RemoveContainer" containerID="70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.447097 5136 scope.go:117] "RemoveContainer" containerID="46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.447646 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb\": container with ID starting with 46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb not found: ID does not exist" containerID="46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.447695 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb"} err="failed to get container status \"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb\": rpc error: code = NotFound desc = could not find container \"46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb\": container with ID starting with 46238ed837d0eef475008679380016865ea47dbe19adbb4fa48f11493fc145bb not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.447723 5136 scope.go:117] "RemoveContainer" containerID="70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.448074 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c\": container with ID starting with 70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c not found: ID does not exist" containerID="70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.448099 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c"} err="failed to get container status \"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c\": rpc error: code = NotFound desc = could not find container 
\"70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c\": container with ID starting with 70798d2130ec02fe1e87b09e3e6f0c61b8831e3d853bce3d884365eb07f74f1c not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.448114 5136 scope.go:117] "RemoveContainer" containerID="f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.466466 5136 scope.go:117] "RemoveContainer" containerID="32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.481546 5136 scope.go:117] "RemoveContainer" containerID="34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.503971 5136 scope.go:117] "RemoveContainer" containerID="81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.585576 5136 scope.go:117] "RemoveContainer" containerID="1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.609451 5136 scope.go:117] "RemoveContainer" containerID="9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.629680 5136 scope.go:117] "RemoveContainer" containerID="09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.645552 5136 scope.go:117] "RemoveContainer" containerID="e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.661432 5136 scope.go:117] "RemoveContainer" containerID="cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.676307 5136 scope.go:117] "RemoveContainer" containerID="8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3" Mar 20 
07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.692939 5136 scope.go:117] "RemoveContainer" containerID="2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.706974 5136 scope.go:117] "RemoveContainer" containerID="c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.725195 5136 scope.go:117] "RemoveContainer" containerID="ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.740554 5136 scope.go:117] "RemoveContainer" containerID="2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.756183 5136 scope.go:117] "RemoveContainer" containerID="83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.776877 5136 scope.go:117] "RemoveContainer" containerID="f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.777368 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a\": container with ID starting with f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a not found: ID does not exist" containerID="f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.777412 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a"} err="failed to get container status \"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a\": rpc error: code = NotFound desc = could not find container \"f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a\": 
container with ID starting with f70c7805bed8025d369139af1fcef9a0696163fd95d836d75eb51737f5c49f2a not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.777446 5136 scope.go:117] "RemoveContainer" containerID="32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.777867 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949\": container with ID starting with 32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949 not found: ID does not exist" containerID="32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.777899 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949"} err="failed to get container status \"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949\": rpc error: code = NotFound desc = could not find container \"32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949\": container with ID starting with 32404a19502351fbc3ac4a39615a6d3615a32ac961dda04a10694ca4139bf949 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.777921 5136 scope.go:117] "RemoveContainer" containerID="34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.778189 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0\": container with ID starting with 34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0 not found: ID does not exist" 
containerID="34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778219 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0"} err="failed to get container status \"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0\": rpc error: code = NotFound desc = could not find container \"34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0\": container with ID starting with 34db5f0f26eb4ddebb1606a5b4d86660e0c87c646601cd77837816ebb46b09a0 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778241 5136 scope.go:117] "RemoveContainer" containerID="81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.778547 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c\": container with ID starting with 81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c not found: ID does not exist" containerID="81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778573 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c"} err="failed to get container status \"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c\": rpc error: code = NotFound desc = could not find container \"81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c\": container with ID starting with 81b02283172458a2bdc8e165bbbbeb7ea96a03cbbe39b13a588b28c223890c2c not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778588 5136 scope.go:117] 
"RemoveContainer" containerID="1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.778828 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532\": container with ID starting with 1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532 not found: ID does not exist" containerID="1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778871 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532"} err="failed to get container status \"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532\": rpc error: code = NotFound desc = could not find container \"1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532\": container with ID starting with 1b16b20dbd8d4539f67032ae3d013ca240799b991d0700b6f5acd362ea9ea532 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.778885 5136 scope.go:117] "RemoveContainer" containerID="9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.779258 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786\": container with ID starting with 9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786 not found: ID does not exist" containerID="9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779280 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786"} err="failed to get container status \"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786\": rpc error: code = NotFound desc = could not find container \"9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786\": container with ID starting with 9683c79e385aab29220bb43ba7f0e910b9a5950fe8b31997f17e20626e70a786 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779295 5136 scope.go:117] "RemoveContainer" containerID="09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.779540 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43\": container with ID starting with 09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43 not found: ID does not exist" containerID="09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779564 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43"} err="failed to get container status \"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43\": rpc error: code = NotFound desc = could not find container \"09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43\": container with ID starting with 09326f1cbbdb488bd8aa1fff37c4c2948117647badc40e244ff6f4905e759d43 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779576 5136 scope.go:117] "RemoveContainer" containerID="e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.779787 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67\": container with ID starting with e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67 not found: ID does not exist" containerID="e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779835 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67"} err="failed to get container status \"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67\": rpc error: code = NotFound desc = could not find container \"e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67\": container with ID starting with e8ad73e1a0dc1afcb0ef9f187b10091e6f56e3fa6f0fda3383c1a9da5f7aff67 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.779855 5136 scope.go:117] "RemoveContainer" containerID="cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.780164 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6\": container with ID starting with cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6 not found: ID does not exist" containerID="cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.780190 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6"} err="failed to get container status \"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6\": rpc error: code = NotFound desc = could not find container 
\"cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6\": container with ID starting with cb5954574bc442ce46a1c3ab46d37e5df7b891264fc62e27a9808855f5912da6 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.780205 5136 scope.go:117] "RemoveContainer" containerID="8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.780568 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3\": container with ID starting with 8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3 not found: ID does not exist" containerID="8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.780607 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3"} err="failed to get container status \"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3\": rpc error: code = NotFound desc = could not find container \"8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3\": container with ID starting with 8dbe0818cd9fc5f247918e143232a8d1b8d59f91b26bb44407c84cb8783ff2a3 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.780647 5136 scope.go:117] "RemoveContainer" containerID="2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.780995 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3\": container with ID starting with 2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3 not found: ID does not exist" 
containerID="2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781014 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3"} err="failed to get container status \"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3\": rpc error: code = NotFound desc = could not find container \"2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3\": container with ID starting with 2781e040da54a91c2e8f03b5afbcb3150c3df4ebd12d977152902fa44468e5f3 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781028 5136 scope.go:117] "RemoveContainer" containerID="c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.781263 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346\": container with ID starting with c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346 not found: ID does not exist" containerID="c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781292 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346"} err="failed to get container status \"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346\": rpc error: code = NotFound desc = could not find container \"c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346\": container with ID starting with c9f8c6f25f2067d893987b366765d078076a7f16dd40a56558cdc7ca6436f346 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781311 5136 scope.go:117] 
"RemoveContainer" containerID="ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.781643 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539\": container with ID starting with ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539 not found: ID does not exist" containerID="ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781663 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539"} err="failed to get container status \"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539\": rpc error: code = NotFound desc = could not find container \"ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539\": container with ID starting with ece61fe1418a6ebf3f8719105d88f8a8d234c50f91addf3cf7652758e2238539 not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781677 5136 scope.go:117] "RemoveContainer" containerID="2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.781903 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa\": container with ID starting with 2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa not found: ID does not exist" containerID="2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781922 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa"} err="failed to get container status \"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa\": rpc error: code = NotFound desc = could not find container \"2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa\": container with ID starting with 2a0075a45d95589caf98b737d491a438c7cb43e5b9c36c7d07ea1aebe37bd4aa not found: ID does not exist" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.781936 5136 scope.go:117] "RemoveContainer" containerID="83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d" Mar 20 07:16:19 crc kubenswrapper[5136]: E0320 07:16:19.782164 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d\": container with ID starting with 83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d not found: ID does not exist" containerID="83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d" Mar 20 07:16:19 crc kubenswrapper[5136]: I0320 07:16:19.782196 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d"} err="failed to get container status \"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d\": rpc error: code = NotFound desc = could not find container \"83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d\": container with ID starting with 83949dae8602e569fb150aeab8e9d2eb9613a5621a13c04058efa4f9dd9f300d not found: ID does not exist" Mar 20 07:16:20 crc kubenswrapper[5136]: I0320 07:16:20.404353 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" path="/var/lib/kubelet/pods/31adef78-59fe-4327-9586-0c12177c7bb7/volumes" Mar 20 07:16:20 crc kubenswrapper[5136]: I0320 
07:16:20.405874 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" path="/var/lib/kubelet/pods/5f48f721-42c9-4f2b-a461-2ad47a1dea3d/volumes" Mar 20 07:16:20 crc kubenswrapper[5136]: I0320 07:16:20.406552 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" path="/var/lib/kubelet/pods/dd944fb6-1517-4f5b-b579-79d8f1f3da19/volumes" Mar 20 07:16:20 crc kubenswrapper[5136]: I0320 07:16:20.541994 5136 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pod8b1461d1-f963-40b0-8cad-a5b2735eedcc"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pod8b1461d1-f963-40b0-8cad-a5b2735eedcc] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8b1461d1_f963_40b0_8cad_a5b2735eedcc.slice" Mar 20 07:16:20 crc kubenswrapper[5136]: E0320 07:16:20.542046 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods besteffort pod8b1461d1-f963-40b0-8cad-a5b2735eedcc] : unable to destroy cgroup paths for cgroup [kubepods besteffort pod8b1461d1-f963-40b0-8cad-a5b2735eedcc] : Timed out while waiting for systemd to remove kubepods-besteffort-pod8b1461d1_f963_40b0_8cad_a5b2735eedcc.slice" pod="openstack/ovsdbserver-nb-0" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" Mar 20 07:16:21 crc kubenswrapper[5136]: I0320 07:16:21.347395 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 07:16:21 crc kubenswrapper[5136]: I0320 07:16:21.377280 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 07:16:21 crc kubenswrapper[5136]: I0320 07:16:21.388666 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 07:16:22 crc kubenswrapper[5136]: I0320 07:16:22.404461 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1461d1-f963-40b0-8cad-a5b2735eedcc" path="/var/lib/kubelet/pods/8b1461d1-f963-40b0-8cad-a5b2735eedcc/volumes" Mar 20 07:16:30 crc kubenswrapper[5136]: I0320 07:16:30.433345 5136 scope.go:117] "RemoveContainer" containerID="a922963e448f67de5c7ef7e39ae9a8fe1051c4a0abe704c7b54dc25c09d90caa" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.323153 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.323987 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="cinder-scheduler" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324001 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="cinder-scheduler" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324012 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324018 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324029 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:39 crc 
kubenswrapper[5136]: I0320 07:16:39.324035 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324043 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server-init" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324048 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server-init" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324058 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324063 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-server" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324073 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-reaper" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324079 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-reaper" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324088 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324093 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324128 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:39 crc kubenswrapper[5136]: 
I0320 07:16:39.324134 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324145 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324151 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324161 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-httpd" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324167 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-httpd" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324180 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324185 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324193 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-api" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324198 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-api" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324208 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 
07:16:39.324213 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324222 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="swift-recon-cron" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324229 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="swift-recon-cron" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324238 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324243 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-server" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324254 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324260 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324275 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="rsync" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324281 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="rsync" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324291 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="probe" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324296 5136 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="probe" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324307 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324313 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324322 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a1410b1-69b7-42b6-85c9-967dbbc05b08" containerName="oc" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324328 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a1410b1-69b7-42b6-85c9-967dbbc05b08" containerName="oc" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324336 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324343 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-server" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324354 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324363 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324374 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324381 5136 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: E0320 07:16:39.324389 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-expirer" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324395 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-expirer" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324538 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324553 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-api" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324565 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovs-vswitchd" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324574 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324587 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a1410b1-69b7-42b6-85c9-967dbbc05b08" containerName="oc" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324599 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="swift-recon-cron" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324611 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324622 5136 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324632 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c52887a-70a8-4d00-a1f9-a5677fa48d1f" containerName="neutron-httpd" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324647 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324659 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="probe" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324674 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="31adef78-59fe-4327-9586-0c12177c7bb7" containerName="cinder-scheduler" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324685 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f48f721-42c9-4f2b-a461-2ad47a1dea3d" containerName="ovsdb-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324696 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="container-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324705 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-auditor" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324717 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-reaper" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324728 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-updater" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324739 5136 
memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324752 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-replicator" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324763 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="rsync" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324775 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d2085e7-db7e-4655-965c-027d03e474e0" containerName="mariadb-account-create-update" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324784 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="object-expirer" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.324791 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd944fb6-1517-4f5b-b579-79d8f1f3da19" containerName="account-server" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.329728 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.337728 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.416367 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.416538 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.416600 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7h2tt\" (UniqueName: \"kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.517605 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.517675 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.517706 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7h2tt\" (UniqueName: \"kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.518135 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.518230 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.541801 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7h2tt\" (UniqueName: \"kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt\") pod \"redhat-marketplace-n67v6\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:39 crc kubenswrapper[5136]: I0320 07:16:39.650187 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:40 crc kubenswrapper[5136]: I0320 07:16:40.078542 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:40 crc kubenswrapper[5136]: I0320 07:16:40.494257 5136 generic.go:334] "Generic (PLEG): container finished" podID="ede5c080-8aac-453b-8d12-89d54e561a16" containerID="e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee" exitCode=0 Mar 20 07:16:40 crc kubenswrapper[5136]: I0320 07:16:40.494309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerDied","Data":"e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee"} Mar 20 07:16:40 crc kubenswrapper[5136]: I0320 07:16:40.494559 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerStarted","Data":"c663be635c978fe0eadab9caed934264165c0da1036ee4ab855b93eaa6e26937"} Mar 20 07:16:41 crc kubenswrapper[5136]: I0320 07:16:41.503298 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerStarted","Data":"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721"} Mar 20 07:16:42 crc kubenswrapper[5136]: I0320 07:16:42.513352 5136 generic.go:334] "Generic (PLEG): container finished" podID="ede5c080-8aac-453b-8d12-89d54e561a16" containerID="7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721" exitCode=0 Mar 20 07:16:42 crc kubenswrapper[5136]: I0320 07:16:42.513433 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" 
event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerDied","Data":"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721"} Mar 20 07:16:43 crc kubenswrapper[5136]: I0320 07:16:43.524477 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerStarted","Data":"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c"} Mar 20 07:16:43 crc kubenswrapper[5136]: I0320 07:16:43.550840 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-n67v6" podStartSLOduration=1.920277851 podStartE2EDuration="4.550800825s" podCreationTimestamp="2026-03-20 07:16:39 +0000 UTC" firstStartedPulling="2026-03-20 07:16:40.495679256 +0000 UTC m=+1632.754990407" lastFinishedPulling="2026-03-20 07:16:43.12620223 +0000 UTC m=+1635.385513381" observedRunningTime="2026-03-20 07:16:43.544965863 +0000 UTC m=+1635.804277044" watchObservedRunningTime="2026-03-20 07:16:43.550800825 +0000 UTC m=+1635.810111976" Mar 20 07:16:49 crc kubenswrapper[5136]: I0320 07:16:49.650729 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:49 crc kubenswrapper[5136]: I0320 07:16:49.651314 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:49 crc kubenswrapper[5136]: I0320 07:16:49.690765 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:50 crc kubenswrapper[5136]: I0320 07:16:50.622156 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:50 crc kubenswrapper[5136]: I0320 07:16:50.670175 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:52 crc kubenswrapper[5136]: I0320 07:16:52.607923 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-n67v6" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="registry-server" containerID="cri-o://eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c" gracePeriod=2 Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.052418 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.204668 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities\") pod \"ede5c080-8aac-453b-8d12-89d54e561a16\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.204727 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7h2tt\" (UniqueName: \"kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt\") pod \"ede5c080-8aac-453b-8d12-89d54e561a16\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.204768 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content\") pod \"ede5c080-8aac-453b-8d12-89d54e561a16\" (UID: \"ede5c080-8aac-453b-8d12-89d54e561a16\") " Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.206463 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities" (OuterVolumeSpecName: "utilities") pod "ede5c080-8aac-453b-8d12-89d54e561a16" (UID: 
"ede5c080-8aac-453b-8d12-89d54e561a16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.210465 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt" (OuterVolumeSpecName: "kube-api-access-7h2tt") pod "ede5c080-8aac-453b-8d12-89d54e561a16" (UID: "ede5c080-8aac-453b-8d12-89d54e561a16"). InnerVolumeSpecName "kube-api-access-7h2tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.266016 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ede5c080-8aac-453b-8d12-89d54e561a16" (UID: "ede5c080-8aac-453b-8d12-89d54e561a16"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.305844 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.305885 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7h2tt\" (UniqueName: \"kubernetes.io/projected/ede5c080-8aac-453b-8d12-89d54e561a16-kube-api-access-7h2tt\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.305896 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ede5c080-8aac-453b-8d12-89d54e561a16-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.619793 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="ede5c080-8aac-453b-8d12-89d54e561a16" containerID="eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c" exitCode=0 Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.619902 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerDied","Data":"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c"} Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.619940 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-n67v6" event={"ID":"ede5c080-8aac-453b-8d12-89d54e561a16","Type":"ContainerDied","Data":"c663be635c978fe0eadab9caed934264165c0da1036ee4ab855b93eaa6e26937"} Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.619967 5136 scope.go:117] "RemoveContainer" containerID="eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.620097 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-n67v6" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.658660 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.664960 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-n67v6"] Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.675400 5136 scope.go:117] "RemoveContainer" containerID="7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.704305 5136 scope.go:117] "RemoveContainer" containerID="e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.723954 5136 scope.go:117] "RemoveContainer" containerID="eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c" Mar 20 07:16:53 crc kubenswrapper[5136]: E0320 07:16:53.724490 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c\": container with ID starting with eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c not found: ID does not exist" containerID="eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.724539 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c"} err="failed to get container status \"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c\": rpc error: code = NotFound desc = could not find container \"eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c\": container with ID starting with eb4c575001fb46f632a9605a78c81dcdb6bfcefd27dc4b84e6fe9f4bfa3e1b4c not found: 
ID does not exist" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.724570 5136 scope.go:117] "RemoveContainer" containerID="7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721" Mar 20 07:16:53 crc kubenswrapper[5136]: E0320 07:16:53.724976 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721\": container with ID starting with 7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721 not found: ID does not exist" containerID="7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.725012 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721"} err="failed to get container status \"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721\": rpc error: code = NotFound desc = could not find container \"7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721\": container with ID starting with 7a7e931ed6013843cdaeac113a4bf6321aa7e61ae145015e56f5d6da9866a721 not found: ID does not exist" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.725041 5136 scope.go:117] "RemoveContainer" containerID="e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee" Mar 20 07:16:53 crc kubenswrapper[5136]: E0320 07:16:53.725399 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee\": container with ID starting with e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee not found: ID does not exist" containerID="e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee" Mar 20 07:16:53 crc kubenswrapper[5136]: I0320 07:16:53.725466 5136 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee"} err="failed to get container status \"e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee\": rpc error: code = NotFound desc = could not find container \"e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee\": container with ID starting with e2c2898dab0189b60a22fc7d710d4ba3b907b46feed5284fadb24d702a7ee2ee not found: ID does not exist" Mar 20 07:16:54 crc kubenswrapper[5136]: I0320 07:16:54.414099 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" path="/var/lib/kubelet/pods/ede5c080-8aac-453b-8d12-89d54e561a16/volumes" Mar 20 07:17:30 crc kubenswrapper[5136]: I0320 07:17:30.996840 5136 scope.go:117] "RemoveContainer" containerID="7f81f78f97fc5d48f48b6354b794c050f707e5b35fc6d46c7df2de9e4878960b" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.028451 5136 scope.go:117] "RemoveContainer" containerID="34ee2cccbe30631969d3aa93a1b8264849d8d5334e0c97572f21e0a6e95e8e26" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.059030 5136 scope.go:117] "RemoveContainer" containerID="933fdd395d96426dd2696ed053dd4cefada8c95df3be0a52f3cc68ad68f9aebb" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.084000 5136 scope.go:117] "RemoveContainer" containerID="bd48417c8a8842903b86c0b0297625af601775c012346c6e4a42ced3c9d81a5c" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.107793 5136 scope.go:117] "RemoveContainer" containerID="c83952221ac9ae15d237b01aa417d2a8651bd6786c0034250cebe0e17be31690" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.129202 5136 scope.go:117] "RemoveContainer" containerID="2e4cee4a85209760afcb1fc4e1920e495e69a4a4c4fbdedacaa3ff6869eb619f" Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.152942 5136 scope.go:117] "RemoveContainer" containerID="f8f2b333bca19081fee1627c5e046485a6793b7781e892f02c6a8b08ca392e57" 
Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.176327 5136 scope.go:117] "RemoveContainer" containerID="dc6f042f4a1f3f8ba50fa65cef930cd8040f1e880b0843b1b3beecf9065681fb"
Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.205905 5136 scope.go:117] "RemoveContainer" containerID="ca4d6aff6fa4147c69ade98576093b5726d3ffc5a53c4a7f48a1261885cf9eaf"
Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.226143 5136 scope.go:117] "RemoveContainer" containerID="df74ea59bf43247509097578b0b44714fcb954b2204d1d34decc8550e92f3f6e"
Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.245564 5136 scope.go:117] "RemoveContainer" containerID="55e70c80be714d08791bbb875a2885eb056808546361bafa1ce59b4a2b4afd94"
Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.261654 5136 scope.go:117] "RemoveContainer" containerID="7056c10c02d573c52be9cb6646cfd2016f281214c76d5613dade95a4d450b824"
Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.290218 5136 scope.go:117] "RemoveContainer" containerID="61abc8440208cd19caa61d866cd42cc249d0d527cfebb488be887ccce4bdea72"
Mar 20 07:17:31 crc kubenswrapper[5136]: I0320 07:17:31.306572 5136 scope.go:117] "RemoveContainer" containerID="0a42176f6839fd2b1fa46f8a90c2d73b4c4eaa11385cb9c81bf9e24e01ecf323"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.150869 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566518-mjsfh"]
Mar 20 07:18:00 crc kubenswrapper[5136]: E0320 07:18:00.151921 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="extract-utilities"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.151950 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="extract-utilities"
Mar 20 07:18:00 crc kubenswrapper[5136]: E0320 07:18:00.151998 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="extract-content"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.152017 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="extract-content"
Mar 20 07:18:00 crc kubenswrapper[5136]: E0320 07:18:00.152070 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="registry-server"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.152087 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="registry-server"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.152346 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ede5c080-8aac-453b-8d12-89d54e561a16" containerName="registry-server"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.153109 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566518-mjsfh"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.156508 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.156693 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.156716 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.169484 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566518-mjsfh"]
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.346865 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvcfv\" (UniqueName: \"kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv\") pod \"auto-csr-approver-29566518-mjsfh\" (UID: \"6e858127-6d5f-4dcd-828c-a6f7b892c4dc\") " pod="openshift-infra/auto-csr-approver-29566518-mjsfh"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.448070 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvcfv\" (UniqueName: \"kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv\") pod \"auto-csr-approver-29566518-mjsfh\" (UID: \"6e858127-6d5f-4dcd-828c-a6f7b892c4dc\") " pod="openshift-infra/auto-csr-approver-29566518-mjsfh"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.465303 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvcfv\" (UniqueName: \"kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv\") pod \"auto-csr-approver-29566518-mjsfh\" (UID: \"6e858127-6d5f-4dcd-828c-a6f7b892c4dc\") " pod="openshift-infra/auto-csr-approver-29566518-mjsfh"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.477892 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566518-mjsfh"
Mar 20 07:18:00 crc kubenswrapper[5136]: I0320 07:18:00.881085 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566518-mjsfh"]
Mar 20 07:18:01 crc kubenswrapper[5136]: I0320 07:18:01.224306 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" event={"ID":"6e858127-6d5f-4dcd-828c-a6f7b892c4dc","Type":"ContainerStarted","Data":"65b618d7fd02f04a380bdd119d1b6cb7996987df0d62f6e11968a52998757989"}
Mar 20 07:18:03 crc kubenswrapper[5136]: I0320 07:18:03.241457 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" event={"ID":"6e858127-6d5f-4dcd-828c-a6f7b892c4dc","Type":"ContainerStarted","Data":"7af0b7f0c5b3f60705910e6fc269402a40ca078da17abc7ff26594b5a890f02e"}
Mar 20 07:18:03 crc kubenswrapper[5136]: I0320 07:18:03.264048 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" podStartSLOduration=1.237537108 podStartE2EDuration="3.26402975s" podCreationTimestamp="2026-03-20 07:18:00 +0000 UTC" firstStartedPulling="2026-03-20 07:18:00.893473126 +0000 UTC m=+1713.152784287" lastFinishedPulling="2026-03-20 07:18:02.919965758 +0000 UTC m=+1715.179276929" observedRunningTime="2026-03-20 07:18:03.260991055 +0000 UTC m=+1715.520302196" watchObservedRunningTime="2026-03-20 07:18:03.26402975 +0000 UTC m=+1715.523340901"
Mar 20 07:18:04 crc kubenswrapper[5136]: I0320 07:18:04.252732 5136 generic.go:334] "Generic (PLEG): container finished" podID="6e858127-6d5f-4dcd-828c-a6f7b892c4dc" containerID="7af0b7f0c5b3f60705910e6fc269402a40ca078da17abc7ff26594b5a890f02e" exitCode=0
Mar 20 07:18:04 crc kubenswrapper[5136]: I0320 07:18:04.252884 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" event={"ID":"6e858127-6d5f-4dcd-828c-a6f7b892c4dc","Type":"ContainerDied","Data":"7af0b7f0c5b3f60705910e6fc269402a40ca078da17abc7ff26594b5a890f02e"}
Mar 20 07:18:05 crc kubenswrapper[5136]: I0320 07:18:05.591306 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566518-mjsfh"
Mar 20 07:18:05 crc kubenswrapper[5136]: I0320 07:18:05.757301 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvcfv\" (UniqueName: \"kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv\") pod \"6e858127-6d5f-4dcd-828c-a6f7b892c4dc\" (UID: \"6e858127-6d5f-4dcd-828c-a6f7b892c4dc\") "
Mar 20 07:18:05 crc kubenswrapper[5136]: I0320 07:18:05.764985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv" (OuterVolumeSpecName: "kube-api-access-rvcfv") pod "6e858127-6d5f-4dcd-828c-a6f7b892c4dc" (UID: "6e858127-6d5f-4dcd-828c-a6f7b892c4dc"). InnerVolumeSpecName "kube-api-access-rvcfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:18:05 crc kubenswrapper[5136]: I0320 07:18:05.859571 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvcfv\" (UniqueName: \"kubernetes.io/projected/6e858127-6d5f-4dcd-828c-a6f7b892c4dc-kube-api-access-rvcfv\") on node \"crc\" DevicePath \"\""
Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.276618 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566518-mjsfh" event={"ID":"6e858127-6d5f-4dcd-828c-a6f7b892c4dc","Type":"ContainerDied","Data":"65b618d7fd02f04a380bdd119d1b6cb7996987df0d62f6e11968a52998757989"}
Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.276694 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65b618d7fd02f04a380bdd119d1b6cb7996987df0d62f6e11968a52998757989"
Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.276799 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566518-mjsfh"
Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.341737 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566512-lrvjf"]
Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.348747 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566512-lrvjf"]
Mar 20 07:18:06 crc kubenswrapper[5136]: I0320 07:18:06.405238 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ace6934-986e-463e-8e10-ea2d38d8657b" path="/var/lib/kubelet/pods/4ace6934-986e-463e-8e10-ea2d38d8657b/volumes"
Mar 20 07:18:15 crc kubenswrapper[5136]: I0320 07:18:15.822247 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 07:18:15 crc kubenswrapper[5136]: I0320 07:18:15.822948 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.525528 5136 scope.go:117] "RemoveContainer" containerID="031d15e3c6d48fb60bf7992b603ae52f0ad57d2692789532d8ff3e43150b8a62"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.574361 5136 scope.go:117] "RemoveContainer" containerID="0ed02eb432d6f42e0d9bf84365b12025d2b0ecfccb688b075f04ab7b6e93a89d"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.597990 5136 scope.go:117] "RemoveContainer" containerID="27458c6d0396483e2bf32a7b77f963fd7b5299335805aa8a1978233da54516f3"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.644788 5136 scope.go:117] "RemoveContainer" containerID="624feab47793180e2f843804104146a4e3de4528636c1ebc9f47f7172993b072"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.690781 5136 scope.go:117] "RemoveContainer" containerID="52c9595f9d03cfa1e4df7232d34e2bf01954bbb2d3d7f55b6c4baddaa2f4853a"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.728315 5136 scope.go:117] "RemoveContainer" containerID="47ae9136918142f0659195583b1d45f1b8d098ff54fd4db577e632c9d504d4ec"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.756151 5136 scope.go:117] "RemoveContainer" containerID="80afd4ebec7d57a2a5f4e5804fe0cafa6290530e8266af5fe943abb82f8b0a3e"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.781358 5136 scope.go:117] "RemoveContainer" containerID="152cfd50a682e083fe5fcb83f9e826724106ecbcb51dbd391cff3907a957fa98"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.803466 5136 scope.go:117] "RemoveContainer" containerID="be9df9297d087d9b583ba3c8a236fca6fd4fd729e25496c50522e980d7021c09"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.836973 5136 scope.go:117] "RemoveContainer" containerID="5e947f339491ac05ba12abc9cb95630dcf48840148917141c549dbda5ca4a25f"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.852941 5136 scope.go:117] "RemoveContainer" containerID="22bd79d8d32272633a42a92ee1e9e96d3d3259073a33ca0ea587ca787429e836"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.884499 5136 scope.go:117] "RemoveContainer" containerID="dd50ea3e8d708d6e3b7b256a2ea07c9211cc4921a494661346061b12daf9a3f3"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.913228 5136 scope.go:117] "RemoveContainer" containerID="ca34610c300fb63b0b8b7fa75b8b5e36ec0f7e9d15dbda229381348a1e3e55be"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.938306 5136 scope.go:117] "RemoveContainer" containerID="685537caeff80758998e736f40d87da6358ae395ce8425cb44887ce77751a0c9"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.954054 5136 scope.go:117] "RemoveContainer" containerID="32a4b8b42d71b772e9ef90a830d8bb2691b008e79e6ac5eedc1a261ab6fb23b2"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.968078 5136 scope.go:117] "RemoveContainer" containerID="8f39b26d5a3a98eb4a0bd3688d06d7a44225eee0ee50099fc60fd5816beb4256"
Mar 20 07:18:31 crc kubenswrapper[5136]: I0320 07:18:31.986953 5136 scope.go:117] "RemoveContainer" containerID="517443469d4fcf677c53f3f830f5a94c22bc034822199c2c81fb70956a791274"
Mar 20 07:18:45 crc kubenswrapper[5136]: I0320 07:18:45.822104 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 07:18:45 crc kubenswrapper[5136]: I0320 07:18:45.822663 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 07:19:15 crc kubenswrapper[5136]: I0320 07:19:15.822586 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 07:19:15 crc kubenswrapper[5136]: I0320 07:19:15.823282 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 07:19:15 crc kubenswrapper[5136]: I0320 07:19:15.823390 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28"
Mar 20 07:19:15 crc kubenswrapper[5136]: I0320 07:19:15.824312 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 20 07:19:15 crc kubenswrapper[5136]: I0320 07:19:15.824421 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" gracePeriod=600
Mar 20 07:19:15 crc kubenswrapper[5136]: E0320 07:19:15.952436 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:19:16 crc kubenswrapper[5136]: I0320 07:19:16.933090 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" exitCode=0
Mar 20 07:19:16 crc kubenswrapper[5136]: I0320 07:19:16.933149 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"}
Mar 20 07:19:16 crc kubenswrapper[5136]: I0320 07:19:16.933484 5136 scope.go:117] "RemoveContainer" containerID="dd4323cb06cbe9a996dc58d915178240fb92871ebdc9b015588397e6f7268db6"
Mar 20 07:19:16 crc kubenswrapper[5136]: I0320 07:19:16.934132 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:19:16 crc kubenswrapper[5136]: E0320 07:19:16.934583 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:19:30 crc kubenswrapper[5136]: I0320 07:19:30.396712 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:19:30 crc kubenswrapper[5136]: E0320 07:19:30.397597 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.215576 5136 scope.go:117] "RemoveContainer" containerID="6963baa6fe7d9db38870a70531888cfee8f7d44c3eff1597da33cf867ee591c8"
Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.246454 5136 scope.go:117] "RemoveContainer" containerID="89325ee63cd0d5963c16a3cd15b18e01966cac4c73b616f8222bb05ec0a94fbe"
Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.297682 5136 scope.go:117] "RemoveContainer" containerID="14f94b6d1dd07b874e83aed25b1716c42ede7203afe8fc38064921b976f5c65d"
Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.334739 5136 scope.go:117] "RemoveContainer" containerID="01aa356b57e965220f79e7a24da86937ea014054be6bb673baf18c8bb2471582"
Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.351908 5136 scope.go:117] "RemoveContainer" containerID="3e0d0bab07ba893f2ec5b9f186f6e1ac58691443de33a6064347527effa3dc1f"
Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.368685 5136 scope.go:117] "RemoveContainer" containerID="9cff72e5160a41a8305e76e0221624a76437830286641f5e18a9ed4e7ae3e23a"
Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.438315 5136 scope.go:117] "RemoveContainer" containerID="76ecc1efaf0109117b2021b8dc8f89423ec738c34b9f16b5ea8ada8e167cdf99"
Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.465124 5136 scope.go:117] "RemoveContainer" containerID="22e326ee5e74b9ee2e3ad6076eac75a725689f97883ae8bb80d1de284edb7a74"
Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.485351 5136 scope.go:117] "RemoveContainer" containerID="0d231656eec1735b1a5bc9e9719bbfb1dc2f5b357bdf072349808ab12f944278"
Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.503346 5136 scope.go:117] "RemoveContainer" containerID="605e2f1b6fdab04852864ae8ba9a1933cc6fbe478b172080fd10f5d23b52f0fe"
Mar 20 07:19:32 crc kubenswrapper[5136]: I0320 07:19:32.543272 5136 scope.go:117] "RemoveContainer" containerID="edc5e28eb62af197edd849dc06e38cdd2bebac736971174a120fd4afd95e52b2"
Mar 20 07:19:43 crc kubenswrapper[5136]: I0320 07:19:43.396508 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:19:43 crc kubenswrapper[5136]: E0320 07:19:43.397188 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:19:58 crc kubenswrapper[5136]: I0320 07:19:58.406052 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:19:58 crc kubenswrapper[5136]: E0320 07:19:58.406900 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.137615 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566520-gp87b"]
Mar 20 07:20:00 crc kubenswrapper[5136]: E0320 07:20:00.137961 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e858127-6d5f-4dcd-828c-a6f7b892c4dc" containerName="oc"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.137978 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e858127-6d5f-4dcd-828c-a6f7b892c4dc" containerName="oc"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.138150 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e858127-6d5f-4dcd-828c-a6f7b892c4dc" containerName="oc"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.138677 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566520-gp87b"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.141598 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.141680 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.141912 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.144866 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566520-gp87b"]
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.327237 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjsmr\" (UniqueName: \"kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr\") pod \"auto-csr-approver-29566520-gp87b\" (UID: \"1114e255-4c25-4a30-88fb-4393c90a6d27\") " pod="openshift-infra/auto-csr-approver-29566520-gp87b"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.428951 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjsmr\" (UniqueName: \"kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr\") pod \"auto-csr-approver-29566520-gp87b\" (UID: \"1114e255-4c25-4a30-88fb-4393c90a6d27\") " pod="openshift-infra/auto-csr-approver-29566520-gp87b"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.461858 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjsmr\" (UniqueName: \"kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr\") pod \"auto-csr-approver-29566520-gp87b\" (UID: \"1114e255-4c25-4a30-88fb-4393c90a6d27\") " pod="openshift-infra/auto-csr-approver-29566520-gp87b"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.479168 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566520-gp87b"
Mar 20 07:20:00 crc kubenswrapper[5136]: I0320 07:20:00.718119 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566520-gp87b"]
Mar 20 07:20:01 crc kubenswrapper[5136]: I0320 07:20:01.311186 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566520-gp87b" event={"ID":"1114e255-4c25-4a30-88fb-4393c90a6d27","Type":"ContainerStarted","Data":"271f92a3a1e2dadc93f97495f414e36c343b5e5ecbb24481158f24020bc36405"}
Mar 20 07:20:02 crc kubenswrapper[5136]: I0320 07:20:02.319757 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566520-gp87b" event={"ID":"1114e255-4c25-4a30-88fb-4393c90a6d27","Type":"ContainerStarted","Data":"cbc3d1a89274343d759e3c647c542017f95e292a6f7b2eb7b7c31cedebd75f6f"}
Mar 20 07:20:02 crc kubenswrapper[5136]: I0320 07:20:02.341900 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566520-gp87b" podStartSLOduration=1.159958189 podStartE2EDuration="2.341873929s" podCreationTimestamp="2026-03-20 07:20:00 +0000 UTC" firstStartedPulling="2026-03-20 07:20:00.72711392 +0000 UTC m=+1832.986425081" lastFinishedPulling="2026-03-20 07:20:01.90902966 +0000 UTC m=+1834.168340821" observedRunningTime="2026-03-20 07:20:02.332500548 +0000 UTC m=+1834.591811739" watchObservedRunningTime="2026-03-20 07:20:02.341873929 +0000 UTC m=+1834.601185090"
Mar 20 07:20:03 crc kubenswrapper[5136]: I0320 07:20:03.334039 5136 generic.go:334] "Generic (PLEG): container finished" podID="1114e255-4c25-4a30-88fb-4393c90a6d27" containerID="cbc3d1a89274343d759e3c647c542017f95e292a6f7b2eb7b7c31cedebd75f6f" exitCode=0
Mar 20 07:20:03 crc kubenswrapper[5136]: I0320 07:20:03.334082 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566520-gp87b" event={"ID":"1114e255-4c25-4a30-88fb-4393c90a6d27","Type":"ContainerDied","Data":"cbc3d1a89274343d759e3c647c542017f95e292a6f7b2eb7b7c31cedebd75f6f"}
Mar 20 07:20:04 crc kubenswrapper[5136]: I0320 07:20:04.652197 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566520-gp87b"
Mar 20 07:20:04 crc kubenswrapper[5136]: I0320 07:20:04.795838 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zjsmr\" (UniqueName: \"kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr\") pod \"1114e255-4c25-4a30-88fb-4393c90a6d27\" (UID: \"1114e255-4c25-4a30-88fb-4393c90a6d27\") "
Mar 20 07:20:04 crc kubenswrapper[5136]: I0320 07:20:04.803300 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr" (OuterVolumeSpecName: "kube-api-access-zjsmr") pod "1114e255-4c25-4a30-88fb-4393c90a6d27" (UID: "1114e255-4c25-4a30-88fb-4393c90a6d27"). InnerVolumeSpecName "kube-api-access-zjsmr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:20:04 crc kubenswrapper[5136]: I0320 07:20:04.897998 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zjsmr\" (UniqueName: \"kubernetes.io/projected/1114e255-4c25-4a30-88fb-4393c90a6d27-kube-api-access-zjsmr\") on node \"crc\" DevicePath \"\""
Mar 20 07:20:05 crc kubenswrapper[5136]: I0320 07:20:05.354588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566520-gp87b" event={"ID":"1114e255-4c25-4a30-88fb-4393c90a6d27","Type":"ContainerDied","Data":"271f92a3a1e2dadc93f97495f414e36c343b5e5ecbb24481158f24020bc36405"}
Mar 20 07:20:05 crc kubenswrapper[5136]: I0320 07:20:05.354949 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="271f92a3a1e2dadc93f97495f414e36c343b5e5ecbb24481158f24020bc36405"
Mar 20 07:20:05 crc kubenswrapper[5136]: I0320 07:20:05.354730 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566520-gp87b"
Mar 20 07:20:05 crc kubenswrapper[5136]: I0320 07:20:05.393279 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566514-w8pvt"]
Mar 20 07:20:05 crc kubenswrapper[5136]: I0320 07:20:05.398351 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566514-w8pvt"]
Mar 20 07:20:06 crc kubenswrapper[5136]: I0320 07:20:06.405286 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f034b011-ac81-4ef1-aa8b-39164a6c98ee" path="/var/lib/kubelet/pods/f034b011-ac81-4ef1-aa8b-39164a6c98ee/volumes"
Mar 20 07:20:10 crc kubenswrapper[5136]: I0320 07:20:10.396562 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:20:10 crc kubenswrapper[5136]: E0320 07:20:10.397304 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:20:22 crc kubenswrapper[5136]: I0320 07:20:22.397120 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:20:22 crc kubenswrapper[5136]: E0320 07:20:22.399343 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.666554 5136 scope.go:117] "RemoveContainer" containerID="27c023c35669b2ba848d9e65c7d0898f10c49020818f9d3f19dccfc3afaa8e46"
Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.698196 5136 scope.go:117] "RemoveContainer" containerID="0b0d8b9ccdc0ce4c2fbd89f7f74e2b08044fdede201e9fe4c0352ac82e9375a6"
Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.732151 5136 scope.go:117] "RemoveContainer" containerID="2b776c776ff30149306adf0b0b9812edd3bba700f5823572b3aaed091e1cf528"
Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.746957 5136 scope.go:117] "RemoveContainer" containerID="57f37d9f2fcaf7ecb1593abeb0dac4f77898bf18e2e7d2992aaecaca2cb60ac9"
Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.769782 5136 scope.go:117] "RemoveContainer" containerID="5b986819d9a90e1b85c9700743fca946f3bc072f13b32ee80b0ccb0986a0c382"
Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.783661 5136 scope.go:117] "RemoveContainer" containerID="d40caae4293d07d1be57a3b0fe6a0c2358da7d3ca34831236dc80ae177c4c105"
Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.818755 5136 scope.go:117] "RemoveContainer" containerID="f7fabe81352183708a864623d8602cc2d8ea0c1282e771482eaebc4b633a5e02"
Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.835092 5136 scope.go:117] "RemoveContainer" containerID="a6ef812d133f600bf2d930bb51a86bd9525704f16b2a680bf46ad2719737b44f"
Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.871235 5136 scope.go:117] "RemoveContainer" containerID="4af2afde3b60e503cf744acf4fb08477b7ec46cb1b30cfb589608690a2df8849"
Mar 20 07:20:32 crc kubenswrapper[5136]: I0320 07:20:32.886606 5136 scope.go:117] "RemoveContainer" containerID="d1609ae90ac31423489405692434f7f762e8aa11262621b19e053461b1226222"
Mar 20 07:20:35 crc kubenswrapper[5136]: I0320 07:20:35.396554 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:20:35 crc kubenswrapper[5136]: E0320 07:20:35.397282 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:20:46 crc kubenswrapper[5136]: I0320 07:20:46.397126 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:20:46 crc kubenswrapper[5136]: E0320 07:20:46.397856 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:21:00 crc kubenswrapper[5136]: I0320 07:21:00.397288 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:21:00 crc kubenswrapper[5136]: E0320 07:21:00.398123 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:21:11 crc kubenswrapper[5136]: I0320 07:21:11.397120 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:21:11 crc kubenswrapper[5136]: E0320 07:21:11.398027 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:21:23 crc kubenswrapper[5136]: I0320 07:21:23.396541 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:21:23 crc kubenswrapper[5136]: E0320 07:21:23.397256 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:21:33 crc kubenswrapper[5136]: I0320 07:21:33.025960 5136 scope.go:117] "RemoveContainer" containerID="cc5a54a6935dd6e523205b586479d84179624ba24df417c663b90589e6d2673f"
Mar 20 07:21:33 crc kubenswrapper[5136]: I0320 07:21:33.058848 5136 scope.go:117] "RemoveContainer" containerID="a35e3106c44fc687668b1f5ba46d5a5060fdc5acc5e49f69f4dc88d5ef142f17"
Mar 20 07:21:33 crc kubenswrapper[5136]: I0320 07:21:33.127194 5136 scope.go:117] "RemoveContainer" containerID="ccb4f9c0c6dc989c486c61d0a17af6a9e3438c25ae843380545c453141823051"
Mar 20 07:21:37 crc kubenswrapper[5136]: I0320 07:21:37.397480 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d"
Mar 20 07:21:37 crc kubenswrapper[5136]: E0320 07:21:37.398410 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.837395 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7sjkx"]
Mar 20 07:21:41 crc kubenswrapper[5136]: E0320 07:21:41.838635 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1114e255-4c25-4a30-88fb-4393c90a6d27" containerName="oc"
Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.838650 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1114e255-4c25-4a30-88fb-4393c90a6d27" containerName="oc"
Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.840722 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1114e255-4c25-4a30-88fb-4393c90a6d27" containerName="oc"
Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.842414 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sjkx"
Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.859353 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sjkx"]
Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.931913 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz7pp\" (UniqueName: \"kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx"
Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.931969 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx"
Mar 20 07:21:41 crc kubenswrapper[5136]: I0320 07:21:41.932018 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx"
Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.033753 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz7pp\"
(UniqueName: \"kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.033838 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.033901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.034445 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.034679 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.066623 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz7pp\" (UniqueName: 
\"kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp\") pod \"community-operators-7sjkx\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.180891 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:42 crc kubenswrapper[5136]: I0320 07:21:42.652402 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sjkx"] Mar 20 07:21:43 crc kubenswrapper[5136]: I0320 07:21:43.214669 5136 generic.go:334] "Generic (PLEG): container finished" podID="f1234585-e4eb-4797-ae7f-037d1124570e" containerID="d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4" exitCode=0 Mar 20 07:21:43 crc kubenswrapper[5136]: I0320 07:21:43.214732 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerDied","Data":"d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4"} Mar 20 07:21:43 crc kubenswrapper[5136]: I0320 07:21:43.215007 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerStarted","Data":"bd8112d3153d61bdcad61a83c70d59f9a21ae4fb6be5303b783fda8fe212c04b"} Mar 20 07:21:43 crc kubenswrapper[5136]: I0320 07:21:43.216881 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.242325 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.244730 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.274585 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.365915 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.366395 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgpz8\" (UniqueName: \"kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.366545 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.467401 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgpz8\" (UniqueName: \"kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.467768 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.467795 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.468306 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.468611 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.501495 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgpz8\" (UniqueName: \"kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8\") pod \"certified-operators-fsxr9\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:44 crc kubenswrapper[5136]: I0320 07:21:44.581863 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 07:21:45.071512 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:45 crc kubenswrapper[5136]: W0320 07:21:45.073210 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode64dddc6_3a07_405d_89ab_3c1a65fc7e40.slice/crio-50cee77aae038fb327618c5294d6b7d171fa7f846ace17f4fa9e1299940e9931 WatchSource:0}: Error finding container 50cee77aae038fb327618c5294d6b7d171fa7f846ace17f4fa9e1299940e9931: Status 404 returned error can't find the container with id 50cee77aae038fb327618c5294d6b7d171fa7f846ace17f4fa9e1299940e9931 Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 07:21:45.263910 5136 generic.go:334] "Generic (PLEG): container finished" podID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerID="3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771" exitCode=0 Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 07:21:45.263978 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxr9" event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerDied","Data":"3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771"} Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 07:21:45.266153 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxr9" event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerStarted","Data":"50cee77aae038fb327618c5294d6b7d171fa7f846ace17f4fa9e1299940e9931"} Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 07:21:45.268890 5136 generic.go:334] "Generic (PLEG): container finished" podID="f1234585-e4eb-4797-ae7f-037d1124570e" containerID="4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac" exitCode=0 Mar 20 07:21:45 crc kubenswrapper[5136]: I0320 
07:21:45.268928 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerDied","Data":"4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac"} Mar 20 07:21:46 crc kubenswrapper[5136]: I0320 07:21:46.279837 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerStarted","Data":"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d"} Mar 20 07:21:46 crc kubenswrapper[5136]: I0320 07:21:46.309966 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7sjkx" podStartSLOduration=2.829420921 podStartE2EDuration="5.309937959s" podCreationTimestamp="2026-03-20 07:21:41 +0000 UTC" firstStartedPulling="2026-03-20 07:21:43.216600625 +0000 UTC m=+1935.475911776" lastFinishedPulling="2026-03-20 07:21:45.697117653 +0000 UTC m=+1937.956428814" observedRunningTime="2026-03-20 07:21:46.300659492 +0000 UTC m=+1938.559970653" watchObservedRunningTime="2026-03-20 07:21:46.309937959 +0000 UTC m=+1938.569249150" Mar 20 07:21:47 crc kubenswrapper[5136]: I0320 07:21:47.288849 5136 generic.go:334] "Generic (PLEG): container finished" podID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerID="56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084" exitCode=0 Mar 20 07:21:47 crc kubenswrapper[5136]: I0320 07:21:47.288894 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxr9" event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerDied","Data":"56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084"} Mar 20 07:21:49 crc kubenswrapper[5136]: I0320 07:21:49.304068 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxr9" 
event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerStarted","Data":"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f"} Mar 20 07:21:49 crc kubenswrapper[5136]: I0320 07:21:49.323242 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fsxr9" podStartSLOduration=1.5444437 podStartE2EDuration="5.323223137s" podCreationTimestamp="2026-03-20 07:21:44 +0000 UTC" firstStartedPulling="2026-03-20 07:21:45.265719149 +0000 UTC m=+1937.525030300" lastFinishedPulling="2026-03-20 07:21:49.044498586 +0000 UTC m=+1941.303809737" observedRunningTime="2026-03-20 07:21:49.320664348 +0000 UTC m=+1941.579975499" watchObservedRunningTime="2026-03-20 07:21:49.323223137 +0000 UTC m=+1941.582534298" Mar 20 07:21:51 crc kubenswrapper[5136]: I0320 07:21:51.397690 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:21:51 crc kubenswrapper[5136]: E0320 07:21:51.398291 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:21:52 crc kubenswrapper[5136]: I0320 07:21:52.182424 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:52 crc kubenswrapper[5136]: I0320 07:21:52.183089 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:52 crc kubenswrapper[5136]: I0320 07:21:52.238593 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:52 crc kubenswrapper[5136]: I0320 07:21:52.378392 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:54 crc kubenswrapper[5136]: I0320 07:21:54.583050 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:54 crc kubenswrapper[5136]: I0320 07:21:54.583146 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:54 crc kubenswrapper[5136]: I0320 07:21:54.634078 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sjkx"] Mar 20 07:21:54 crc kubenswrapper[5136]: I0320 07:21:54.634477 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7sjkx" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="registry-server" containerID="cri-o://a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d" gracePeriod=2 Mar 20 07:21:54 crc kubenswrapper[5136]: I0320 07:21:54.639437 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.079345 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.228379 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz7pp\" (UniqueName: \"kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp\") pod \"f1234585-e4eb-4797-ae7f-037d1124570e\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.228508 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities\") pod \"f1234585-e4eb-4797-ae7f-037d1124570e\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.228555 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content\") pod \"f1234585-e4eb-4797-ae7f-037d1124570e\" (UID: \"f1234585-e4eb-4797-ae7f-037d1124570e\") " Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.229801 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities" (OuterVolumeSpecName: "utilities") pod "f1234585-e4eb-4797-ae7f-037d1124570e" (UID: "f1234585-e4eb-4797-ae7f-037d1124570e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.236463 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp" (OuterVolumeSpecName: "kube-api-access-qz7pp") pod "f1234585-e4eb-4797-ae7f-037d1124570e" (UID: "f1234585-e4eb-4797-ae7f-037d1124570e"). InnerVolumeSpecName "kube-api-access-qz7pp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.330751 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz7pp\" (UniqueName: \"kubernetes.io/projected/f1234585-e4eb-4797-ae7f-037d1124570e-kube-api-access-qz7pp\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.330793 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.355792 5136 generic.go:334] "Generic (PLEG): container finished" podID="f1234585-e4eb-4797-ae7f-037d1124570e" containerID="a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d" exitCode=0 Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.355846 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerDied","Data":"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d"} Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.355890 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sjkx" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.355901 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sjkx" event={"ID":"f1234585-e4eb-4797-ae7f-037d1124570e","Type":"ContainerDied","Data":"bd8112d3153d61bdcad61a83c70d59f9a21ae4fb6be5303b783fda8fe212c04b"} Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.355923 5136 scope.go:117] "RemoveContainer" containerID="a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.390371 5136 scope.go:117] "RemoveContainer" containerID="4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.395259 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.418137 5136 scope.go:117] "RemoveContainer" containerID="d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.453865 5136 scope.go:117] "RemoveContainer" containerID="a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d" Mar 20 07:21:55 crc kubenswrapper[5136]: E0320 07:21:55.454519 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d\": container with ID starting with a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d not found: ID does not exist" containerID="a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.454606 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d"} err="failed to get container status \"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d\": rpc error: code = NotFound desc = could not find container \"a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d\": container with ID starting with a4e4f7d5c91a6e8e883dd68acd4deedf154928dda8858a0564c49b6cb38a4c9d not found: ID does not exist" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.454636 5136 scope.go:117] "RemoveContainer" containerID="4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac" Mar 20 07:21:55 crc kubenswrapper[5136]: E0320 07:21:55.455007 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac\": container with ID starting with 4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac not found: ID does not exist" containerID="4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.455051 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac"} err="failed to get container status \"4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac\": rpc error: code = NotFound desc = could not find container \"4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac\": container with ID starting with 4683096035afa8f71db2df230789794b0de21e60b2b26b10014a83395863afac not found: ID does not exist" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.455076 5136 scope.go:117] "RemoveContainer" containerID="d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4" Mar 20 07:21:55 crc kubenswrapper[5136]: E0320 07:21:55.455411 5136 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4\": container with ID starting with d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4 not found: ID does not exist" containerID="d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.455448 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4"} err="failed to get container status \"d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4\": rpc error: code = NotFound desc = could not find container \"d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4\": container with ID starting with d7efa6515f8370e3cf4c97f0147eabb814aa835c61658d3cc3ca1b7cf8c8a3a4 not found: ID does not exist" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.550801 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f1234585-e4eb-4797-ae7f-037d1124570e" (UID: "f1234585-e4eb-4797-ae7f-037d1124570e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.639218 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f1234585-e4eb-4797-ae7f-037d1124570e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.692225 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sjkx"] Mar 20 07:21:55 crc kubenswrapper[5136]: I0320 07:21:55.700082 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7sjkx"] Mar 20 07:21:56 crc kubenswrapper[5136]: I0320 07:21:56.411723 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" path="/var/lib/kubelet/pods/f1234585-e4eb-4797-ae7f-037d1124570e/volumes" Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.226444 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.382301 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fsxr9" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="registry-server" containerID="cri-o://97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f" gracePeriod=2 Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.789588 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.877443 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities\") pod \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.877511 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content\") pod \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.877551 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgpz8\" (UniqueName: \"kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8\") pod \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\" (UID: \"e64dddc6-3a07-405d-89ab-3c1a65fc7e40\") " Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.879738 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities" (OuterVolumeSpecName: "utilities") pod "e64dddc6-3a07-405d-89ab-3c1a65fc7e40" (UID: "e64dddc6-3a07-405d-89ab-3c1a65fc7e40"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.883876 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8" (OuterVolumeSpecName: "kube-api-access-rgpz8") pod "e64dddc6-3a07-405d-89ab-3c1a65fc7e40" (UID: "e64dddc6-3a07-405d-89ab-3c1a65fc7e40"). InnerVolumeSpecName "kube-api-access-rgpz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.978590 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:57 crc kubenswrapper[5136]: I0320 07:21:57.978624 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgpz8\" (UniqueName: \"kubernetes.io/projected/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-kube-api-access-rgpz8\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.206566 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e64dddc6-3a07-405d-89ab-3c1a65fc7e40" (UID: "e64dddc6-3a07-405d-89ab-3c1a65fc7e40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.282382 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e64dddc6-3a07-405d-89ab-3c1a65fc7e40-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.393146 5136 generic.go:334] "Generic (PLEG): container finished" podID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerID="97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f" exitCode=0 Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.393218 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fsxr9" event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerDied","Data":"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f"} Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.393258 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-fsxr9" event={"ID":"e64dddc6-3a07-405d-89ab-3c1a65fc7e40","Type":"ContainerDied","Data":"50cee77aae038fb327618c5294d6b7d171fa7f846ace17f4fa9e1299940e9931"} Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.393288 5136 scope.go:117] "RemoveContainer" containerID="97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.393311 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fsxr9" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.426672 5136 scope.go:117] "RemoveContainer" containerID="56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.456482 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.469065 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fsxr9"] Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.473114 5136 scope.go:117] "RemoveContainer" containerID="3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.498617 5136 scope.go:117] "RemoveContainer" containerID="97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f" Mar 20 07:21:58 crc kubenswrapper[5136]: E0320 07:21:58.499268 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f\": container with ID starting with 97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f not found: ID does not exist" containerID="97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 
07:21:58.499335 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f"} err="failed to get container status \"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f\": rpc error: code = NotFound desc = could not find container \"97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f\": container with ID starting with 97e7bd2460bf98be7c10edf76286e78ab6cf1f6cce3a04c9f4a991f2b217464f not found: ID does not exist" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.499378 5136 scope.go:117] "RemoveContainer" containerID="56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084" Mar 20 07:21:58 crc kubenswrapper[5136]: E0320 07:21:58.499763 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084\": container with ID starting with 56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084 not found: ID does not exist" containerID="56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.499802 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084"} err="failed to get container status \"56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084\": rpc error: code = NotFound desc = could not find container \"56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084\": container with ID starting with 56520cd11d487f50ae1e5c860ce68a62ea017993b75f4b5262cbab53ccff5084 not found: ID does not exist" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.499849 5136 scope.go:117] "RemoveContainer" containerID="3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771" Mar 20 07:21:58 crc 
kubenswrapper[5136]: E0320 07:21:58.500454 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771\": container with ID starting with 3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771 not found: ID does not exist" containerID="3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771" Mar 20 07:21:58 crc kubenswrapper[5136]: I0320 07:21:58.500478 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771"} err="failed to get container status \"3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771\": rpc error: code = NotFound desc = could not find container \"3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771\": container with ID starting with 3e345b34d281dbfe3622a535cab7d7c5028e02bb480ce509ffed097c3a59c771 not found: ID does not exist" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148056 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566522-6qksf"] Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148597 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="extract-utilities" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148609 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="extract-utilities" Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148619 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="extract-utilities" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148625 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" 
containerName="extract-utilities" Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148636 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148642 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148658 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148664 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148673 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="extract-content" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148680 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" containerName="extract-content" Mar 20 07:22:00 crc kubenswrapper[5136]: E0320 07:22:00.148694 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="extract-content" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148700 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="extract-content" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148849 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1234585-e4eb-4797-ae7f-037d1124570e" containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.148868 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" 
containerName="registry-server" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.149269 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.154024 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.155148 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.155190 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.164455 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566522-6qksf"] Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.214653 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45tp5\" (UniqueName: \"kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5\") pod \"auto-csr-approver-29566522-6qksf\" (UID: \"89e4d1fb-8e51-468f-877b-49847c583d53\") " pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.316338 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45tp5\" (UniqueName: \"kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5\") pod \"auto-csr-approver-29566522-6qksf\" (UID: \"89e4d1fb-8e51-468f-877b-49847c583d53\") " pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.337447 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45tp5\" (UniqueName: 
\"kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5\") pod \"auto-csr-approver-29566522-6qksf\" (UID: \"89e4d1fb-8e51-468f-877b-49847c583d53\") " pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.404663 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e64dddc6-3a07-405d-89ab-3c1a65fc7e40" path="/var/lib/kubelet/pods/e64dddc6-3a07-405d-89ab-3c1a65fc7e40/volumes" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.498447 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:00 crc kubenswrapper[5136]: I0320 07:22:00.894534 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566522-6qksf"] Mar 20 07:22:01 crc kubenswrapper[5136]: I0320 07:22:01.416632 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566522-6qksf" event={"ID":"89e4d1fb-8e51-468f-877b-49847c583d53","Type":"ContainerStarted","Data":"afd17f0acb5bffbad866324cbf61d969f311e4fa1136dc552f5b8b946c820641"} Mar 20 07:22:02 crc kubenswrapper[5136]: I0320 07:22:02.396595 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:22:02 crc kubenswrapper[5136]: E0320 07:22:02.397086 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:22:02 crc kubenswrapper[5136]: I0320 07:22:02.424427 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29566522-6qksf" event={"ID":"89e4d1fb-8e51-468f-877b-49847c583d53","Type":"ContainerStarted","Data":"18ba546b64b0c89c0f6afc33df4ae25c09a3d7098c15cfd5a2ea63fa1ebde19e"} Mar 20 07:22:02 crc kubenswrapper[5136]: I0320 07:22:02.443684 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566522-6qksf" podStartSLOduration=1.239097502 podStartE2EDuration="2.443665742s" podCreationTimestamp="2026-03-20 07:22:00 +0000 UTC" firstStartedPulling="2026-03-20 07:22:00.904648757 +0000 UTC m=+1953.163959908" lastFinishedPulling="2026-03-20 07:22:02.109216997 +0000 UTC m=+1954.368528148" observedRunningTime="2026-03-20 07:22:02.4384291 +0000 UTC m=+1954.697740261" watchObservedRunningTime="2026-03-20 07:22:02.443665742 +0000 UTC m=+1954.702976913" Mar 20 07:22:03 crc kubenswrapper[5136]: I0320 07:22:03.433604 5136 generic.go:334] "Generic (PLEG): container finished" podID="89e4d1fb-8e51-468f-877b-49847c583d53" containerID="18ba546b64b0c89c0f6afc33df4ae25c09a3d7098c15cfd5a2ea63fa1ebde19e" exitCode=0 Mar 20 07:22:03 crc kubenswrapper[5136]: I0320 07:22:03.433684 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566522-6qksf" event={"ID":"89e4d1fb-8e51-468f-877b-49847c583d53","Type":"ContainerDied","Data":"18ba546b64b0c89c0f6afc33df4ae25c09a3d7098c15cfd5a2ea63fa1ebde19e"} Mar 20 07:22:04 crc kubenswrapper[5136]: I0320 07:22:04.727956 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:04 crc kubenswrapper[5136]: I0320 07:22:04.879800 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45tp5\" (UniqueName: \"kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5\") pod \"89e4d1fb-8e51-468f-877b-49847c583d53\" (UID: \"89e4d1fb-8e51-468f-877b-49847c583d53\") " Mar 20 07:22:04 crc kubenswrapper[5136]: I0320 07:22:04.889129 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5" (OuterVolumeSpecName: "kube-api-access-45tp5") pod "89e4d1fb-8e51-468f-877b-49847c583d53" (UID: "89e4d1fb-8e51-468f-877b-49847c583d53"). InnerVolumeSpecName "kube-api-access-45tp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:22:04 crc kubenswrapper[5136]: I0320 07:22:04.982121 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45tp5\" (UniqueName: \"kubernetes.io/projected/89e4d1fb-8e51-468f-877b-49847c583d53-kube-api-access-45tp5\") on node \"crc\" DevicePath \"\"" Mar 20 07:22:05 crc kubenswrapper[5136]: I0320 07:22:05.449364 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566522-6qksf" event={"ID":"89e4d1fb-8e51-468f-877b-49847c583d53","Type":"ContainerDied","Data":"afd17f0acb5bffbad866324cbf61d969f311e4fa1136dc552f5b8b946c820641"} Mar 20 07:22:05 crc kubenswrapper[5136]: I0320 07:22:05.449408 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afd17f0acb5bffbad866324cbf61d969f311e4fa1136dc552f5b8b946c820641" Mar 20 07:22:05 crc kubenswrapper[5136]: I0320 07:22:05.449477 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566522-6qksf" Mar 20 07:22:05 crc kubenswrapper[5136]: I0320 07:22:05.499698 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566516-2dnr7"] Mar 20 07:22:05 crc kubenswrapper[5136]: I0320 07:22:05.505175 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566516-2dnr7"] Mar 20 07:22:06 crc kubenswrapper[5136]: I0320 07:22:06.408761 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a1410b1-69b7-42b6-85c9-967dbbc05b08" path="/var/lib/kubelet/pods/3a1410b1-69b7-42b6-85c9-967dbbc05b08/volumes" Mar 20 07:22:13 crc kubenswrapper[5136]: I0320 07:22:13.397394 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:22:13 crc kubenswrapper[5136]: E0320 07:22:13.398315 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:22:27 crc kubenswrapper[5136]: I0320 07:22:27.397019 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:22:27 crc kubenswrapper[5136]: E0320 07:22:27.398108 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:22:33 crc kubenswrapper[5136]: I0320 07:22:33.185028 5136 scope.go:117] "RemoveContainer" containerID="71a9b19bcf4bcf8c4a69410e7ffac0d108a4db9d76a7cd352479549f5c15e6f8" Mar 20 07:22:40 crc kubenswrapper[5136]: I0320 07:22:40.397286 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:22:40 crc kubenswrapper[5136]: E0320 07:22:40.398255 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:22:55 crc kubenswrapper[5136]: I0320 07:22:55.397320 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:22:55 crc kubenswrapper[5136]: E0320 07:22:55.398229 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:23:09 crc kubenswrapper[5136]: I0320 07:23:09.396526 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:23:09 crc kubenswrapper[5136]: E0320 07:23:09.397550 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:23:24 crc kubenswrapper[5136]: I0320 07:23:24.397021 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:23:24 crc kubenswrapper[5136]: E0320 07:23:24.397840 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:23:39 crc kubenswrapper[5136]: I0320 07:23:39.396540 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:23:39 crc kubenswrapper[5136]: E0320 07:23:39.398308 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:23:52 crc kubenswrapper[5136]: I0320 07:23:52.396726 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:23:52 crc kubenswrapper[5136]: E0320 07:23:52.398670 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.166232 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566524-rfjd7"] Mar 20 07:24:00 crc kubenswrapper[5136]: E0320 07:24:00.169135 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89e4d1fb-8e51-468f-877b-49847c583d53" containerName="oc" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.169327 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="89e4d1fb-8e51-468f-877b-49847c583d53" containerName="oc" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.169977 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="89e4d1fb-8e51-468f-877b-49847c583d53" containerName="oc" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.171102 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.175162 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.177486 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.178383 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.186972 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566524-rfjd7"] Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.218611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrzbz\" (UniqueName: \"kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz\") pod \"auto-csr-approver-29566524-rfjd7\" (UID: \"ef36bf3c-a18a-4fe4-829e-818ee309667e\") " pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.320056 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrzbz\" (UniqueName: \"kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz\") pod \"auto-csr-approver-29566524-rfjd7\" (UID: \"ef36bf3c-a18a-4fe4-829e-818ee309667e\") " pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.340938 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrzbz\" (UniqueName: \"kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz\") pod \"auto-csr-approver-29566524-rfjd7\" (UID: \"ef36bf3c-a18a-4fe4-829e-818ee309667e\") " 
pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.501483 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:00 crc kubenswrapper[5136]: I0320 07:24:00.969538 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566524-rfjd7"] Mar 20 07:24:01 crc kubenswrapper[5136]: I0320 07:24:01.394311 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" event={"ID":"ef36bf3c-a18a-4fe4-829e-818ee309667e","Type":"ContainerStarted","Data":"3ebd5abb20f2da2e439c8d5e0c0df6c7761fbc80623e5e312bdfd00850a1290e"} Mar 20 07:24:02 crc kubenswrapper[5136]: I0320 07:24:02.404586 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" event={"ID":"ef36bf3c-a18a-4fe4-829e-818ee309667e","Type":"ContainerStarted","Data":"b0a37f409a618d7e4e5c53b904c0bfbe0cfe72b34fedd22d7f0f23af1ad97d50"} Mar 20 07:24:02 crc kubenswrapper[5136]: I0320 07:24:02.426988 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" podStartSLOduration=1.327750457 podStartE2EDuration="2.426961698s" podCreationTimestamp="2026-03-20 07:24:00 +0000 UTC" firstStartedPulling="2026-03-20 07:24:00.985652115 +0000 UTC m=+2073.244963276" lastFinishedPulling="2026-03-20 07:24:02.084863336 +0000 UTC m=+2074.344174517" observedRunningTime="2026-03-20 07:24:02.415116431 +0000 UTC m=+2074.674427582" watchObservedRunningTime="2026-03-20 07:24:02.426961698 +0000 UTC m=+2074.686272879" Mar 20 07:24:03 crc kubenswrapper[5136]: I0320 07:24:03.413740 5136 generic.go:334] "Generic (PLEG): container finished" podID="ef36bf3c-a18a-4fe4-829e-818ee309667e" containerID="b0a37f409a618d7e4e5c53b904c0bfbe0cfe72b34fedd22d7f0f23af1ad97d50" exitCode=0 Mar 20 07:24:03 crc 
kubenswrapper[5136]: I0320 07:24:03.413801 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" event={"ID":"ef36bf3c-a18a-4fe4-829e-818ee309667e","Type":"ContainerDied","Data":"b0a37f409a618d7e4e5c53b904c0bfbe0cfe72b34fedd22d7f0f23af1ad97d50"} Mar 20 07:24:04 crc kubenswrapper[5136]: I0320 07:24:04.743423 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:04 crc kubenswrapper[5136]: I0320 07:24:04.888445 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mrzbz\" (UniqueName: \"kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz\") pod \"ef36bf3c-a18a-4fe4-829e-818ee309667e\" (UID: \"ef36bf3c-a18a-4fe4-829e-818ee309667e\") " Mar 20 07:24:04 crc kubenswrapper[5136]: I0320 07:24:04.894551 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz" (OuterVolumeSpecName: "kube-api-access-mrzbz") pod "ef36bf3c-a18a-4fe4-829e-818ee309667e" (UID: "ef36bf3c-a18a-4fe4-829e-818ee309667e"). InnerVolumeSpecName "kube-api-access-mrzbz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:24:04 crc kubenswrapper[5136]: I0320 07:24:04.990249 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mrzbz\" (UniqueName: \"kubernetes.io/projected/ef36bf3c-a18a-4fe4-829e-818ee309667e-kube-api-access-mrzbz\") on node \"crc\" DevicePath \"\"" Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.396389 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:24:05 crc kubenswrapper[5136]: E0320 07:24:05.397365 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.435542 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" event={"ID":"ef36bf3c-a18a-4fe4-829e-818ee309667e","Type":"ContainerDied","Data":"3ebd5abb20f2da2e439c8d5e0c0df6c7761fbc80623e5e312bdfd00850a1290e"} Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.435798 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ebd5abb20f2da2e439c8d5e0c0df6c7761fbc80623e5e312bdfd00850a1290e" Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.435642 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566524-rfjd7" Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.498141 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566518-mjsfh"] Mar 20 07:24:05 crc kubenswrapper[5136]: I0320 07:24:05.503943 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566518-mjsfh"] Mar 20 07:24:06 crc kubenswrapper[5136]: I0320 07:24:06.405729 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e858127-6d5f-4dcd-828c-a6f7b892c4dc" path="/var/lib/kubelet/pods/6e858127-6d5f-4dcd-828c-a6f7b892c4dc/volumes" Mar 20 07:24:16 crc kubenswrapper[5136]: I0320 07:24:16.396647 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:24:17 crc kubenswrapper[5136]: I0320 07:24:17.540500 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6"} Mar 20 07:24:33 crc kubenswrapper[5136]: I0320 07:24:33.296272 5136 scope.go:117] "RemoveContainer" containerID="7af0b7f0c5b3f60705910e6fc269402a40ca078da17abc7ff26594b5a890f02e" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.057519 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:00 crc kubenswrapper[5136]: E0320 07:25:00.058316 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef36bf3c-a18a-4fe4-829e-818ee309667e" containerName="oc" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.058328 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef36bf3c-a18a-4fe4-829e-818ee309667e" containerName="oc" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.058472 5136 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ef36bf3c-a18a-4fe4-829e-818ee309667e" containerName="oc" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.059604 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.074798 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.118900 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzvk9\" (UniqueName: \"kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.118979 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.119052 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.220296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzvk9\" (UniqueName: \"kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9\") pod 
\"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.220363 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.220400 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.220987 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.221011 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.238442 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzvk9\" (UniqueName: \"kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9\") pod \"redhat-operators-98d26\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " 
pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.400015 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.852457 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:00 crc kubenswrapper[5136]: W0320 07:25:00.858708 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod533b717e_2ea8_4f17_85b0_7520f8318f19.slice/crio-f7f9bbe52eae67fc1c5b09090364785f468eb6750d5bb39f5585fb1093ff82be WatchSource:0}: Error finding container f7f9bbe52eae67fc1c5b09090364785f468eb6750d5bb39f5585fb1093ff82be: Status 404 returned error can't find the container with id f7f9bbe52eae67fc1c5b09090364785f468eb6750d5bb39f5585fb1093ff82be Mar 20 07:25:00 crc kubenswrapper[5136]: I0320 07:25:00.924359 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerStarted","Data":"f7f9bbe52eae67fc1c5b09090364785f468eb6750d5bb39f5585fb1093ff82be"} Mar 20 07:25:01 crc kubenswrapper[5136]: I0320 07:25:01.934110 5136 generic.go:334] "Generic (PLEG): container finished" podID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerID="28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6" exitCode=0 Mar 20 07:25:01 crc kubenswrapper[5136]: I0320 07:25:01.934147 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerDied","Data":"28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6"} Mar 20 07:25:03 crc kubenswrapper[5136]: I0320 07:25:03.948116 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerID="2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8" exitCode=0 Mar 20 07:25:03 crc kubenswrapper[5136]: I0320 07:25:03.948188 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerDied","Data":"2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8"} Mar 20 07:25:04 crc kubenswrapper[5136]: I0320 07:25:04.957674 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerStarted","Data":"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5"} Mar 20 07:25:04 crc kubenswrapper[5136]: I0320 07:25:04.979785 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-98d26" podStartSLOduration=2.533306111 podStartE2EDuration="4.979768406s" podCreationTimestamp="2026-03-20 07:25:00 +0000 UTC" firstStartedPulling="2026-03-20 07:25:01.937399628 +0000 UTC m=+2134.196710769" lastFinishedPulling="2026-03-20 07:25:04.383861883 +0000 UTC m=+2136.643173064" observedRunningTime="2026-03-20 07:25:04.972163091 +0000 UTC m=+2137.231474232" watchObservedRunningTime="2026-03-20 07:25:04.979768406 +0000 UTC m=+2137.239079557" Mar 20 07:25:10 crc kubenswrapper[5136]: I0320 07:25:10.405847 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:10 crc kubenswrapper[5136]: I0320 07:25:10.406377 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:11 crc kubenswrapper[5136]: I0320 07:25:11.462725 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-98d26" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" 
containerName="registry-server" probeResult="failure" output=< Mar 20 07:25:11 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 07:25:11 crc kubenswrapper[5136]: > Mar 20 07:25:20 crc kubenswrapper[5136]: I0320 07:25:20.447187 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:20 crc kubenswrapper[5136]: I0320 07:25:20.496997 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:20 crc kubenswrapper[5136]: I0320 07:25:20.695907 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.084501 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-98d26" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="registry-server" containerID="cri-o://e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5" gracePeriod=2 Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.523619 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.658673 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities\") pod \"533b717e-2ea8-4f17-85b0-7520f8318f19\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.658784 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzvk9\" (UniqueName: \"kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9\") pod \"533b717e-2ea8-4f17-85b0-7520f8318f19\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.658837 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content\") pod \"533b717e-2ea8-4f17-85b0-7520f8318f19\" (UID: \"533b717e-2ea8-4f17-85b0-7520f8318f19\") " Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.659984 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities" (OuterVolumeSpecName: "utilities") pod "533b717e-2ea8-4f17-85b0-7520f8318f19" (UID: "533b717e-2ea8-4f17-85b0-7520f8318f19"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.673116 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9" (OuterVolumeSpecName: "kube-api-access-dzvk9") pod "533b717e-2ea8-4f17-85b0-7520f8318f19" (UID: "533b717e-2ea8-4f17-85b0-7520f8318f19"). InnerVolumeSpecName "kube-api-access-dzvk9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.760858 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzvk9\" (UniqueName: \"kubernetes.io/projected/533b717e-2ea8-4f17-85b0-7520f8318f19-kube-api-access-dzvk9\") on node \"crc\" DevicePath \"\"" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.760885 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.803128 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "533b717e-2ea8-4f17-85b0-7520f8318f19" (UID: "533b717e-2ea8-4f17-85b0-7520f8318f19"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:25:22 crc kubenswrapper[5136]: I0320 07:25:22.861642 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/533b717e-2ea8-4f17-85b0-7520f8318f19-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.093041 5136 generic.go:334] "Generic (PLEG): container finished" podID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerID="e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5" exitCode=0 Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.093081 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-98d26" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.093086 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerDied","Data":"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5"} Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.093137 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-98d26" event={"ID":"533b717e-2ea8-4f17-85b0-7520f8318f19","Type":"ContainerDied","Data":"f7f9bbe52eae67fc1c5b09090364785f468eb6750d5bb39f5585fb1093ff82be"} Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.093153 5136 scope.go:117] "RemoveContainer" containerID="e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.129612 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.131969 5136 scope.go:117] "RemoveContainer" containerID="2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.147662 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-98d26"] Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.163493 5136 scope.go:117] "RemoveContainer" containerID="28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.193130 5136 scope.go:117] "RemoveContainer" containerID="e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5" Mar 20 07:25:23 crc kubenswrapper[5136]: E0320 07:25:23.193835 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5\": container with ID starting with e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5 not found: ID does not exist" containerID="e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.193888 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5"} err="failed to get container status \"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5\": rpc error: code = NotFound desc = could not find container \"e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5\": container with ID starting with e4febc8a166705d764420ff0d52bb5f0a96e9daf9fc3b88b00c8f0ba42fb27f5 not found: ID does not exist" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.193922 5136 scope.go:117] "RemoveContainer" containerID="2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8" Mar 20 07:25:23 crc kubenswrapper[5136]: E0320 07:25:23.194435 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8\": container with ID starting with 2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8 not found: ID does not exist" containerID="2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.194474 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8"} err="failed to get container status \"2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8\": rpc error: code = NotFound desc = could not find container \"2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8\": container with ID 
starting with 2cf60110a6562b022e4b9b78ad3d0e636367340468cdff79445f7ad5ba8521a8 not found: ID does not exist" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.194500 5136 scope.go:117] "RemoveContainer" containerID="28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6" Mar 20 07:25:23 crc kubenswrapper[5136]: E0320 07:25:23.195038 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6\": container with ID starting with 28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6 not found: ID does not exist" containerID="28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6" Mar 20 07:25:23 crc kubenswrapper[5136]: I0320 07:25:23.195112 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6"} err="failed to get container status \"28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6\": rpc error: code = NotFound desc = could not find container \"28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6\": container with ID starting with 28013d12a4cc7598286b7a03c9f0480cbf3bdf02af84482575e90e1638cf2db6 not found: ID does not exist" Mar 20 07:25:24 crc kubenswrapper[5136]: I0320 07:25:24.406201 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" path="/var/lib/kubelet/pods/533b717e-2ea8-4f17-85b0-7520f8318f19/volumes" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.177114 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566526-qp2cz"] Mar 20 07:26:00 crc kubenswrapper[5136]: E0320 07:26:00.178456 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="extract-content" Mar 20 07:26:00 crc 
kubenswrapper[5136]: I0320 07:26:00.178477 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="extract-content" Mar 20 07:26:00 crc kubenswrapper[5136]: E0320 07:26:00.178497 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="extract-utilities" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.178510 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="extract-utilities" Mar 20 07:26:00 crc kubenswrapper[5136]: E0320 07:26:00.178543 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="registry-server" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.178557 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="registry-server" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.178774 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="533b717e-2ea8-4f17-85b0-7520f8318f19" containerName="registry-server" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.179460 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.183520 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.183578 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.183887 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.188774 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566526-qp2cz"] Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.327349 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxjlx\" (UniqueName: \"kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx\") pod \"auto-csr-approver-29566526-qp2cz\" (UID: \"198ab1b0-b88b-4a70-aae0-650c78826519\") " pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.428771 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxjlx\" (UniqueName: \"kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx\") pod \"auto-csr-approver-29566526-qp2cz\" (UID: \"198ab1b0-b88b-4a70-aae0-650c78826519\") " pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.461734 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxjlx\" (UniqueName: \"kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx\") pod \"auto-csr-approver-29566526-qp2cz\" (UID: \"198ab1b0-b88b-4a70-aae0-650c78826519\") " 
pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.512855 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:00 crc kubenswrapper[5136]: I0320 07:26:00.995980 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566526-qp2cz"] Mar 20 07:26:01 crc kubenswrapper[5136]: I0320 07:26:01.426222 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" event={"ID":"198ab1b0-b88b-4a70-aae0-650c78826519","Type":"ContainerStarted","Data":"194d752cbc87284ddf7365784184b26d6f665f22279e41cd0d0f5ea006c35f32"} Mar 20 07:26:03 crc kubenswrapper[5136]: I0320 07:26:03.456872 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" event={"ID":"198ab1b0-b88b-4a70-aae0-650c78826519","Type":"ContainerStarted","Data":"e9b25b5fd15a592a38a8ef02f8f0787d7aae0d22dff9f222f8354b7975e0efb4"} Mar 20 07:26:03 crc kubenswrapper[5136]: I0320 07:26:03.476923 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" podStartSLOduration=1.50593121 podStartE2EDuration="3.476902707s" podCreationTimestamp="2026-03-20 07:26:00 +0000 UTC" firstStartedPulling="2026-03-20 07:26:01.00707332 +0000 UTC m=+2193.266384511" lastFinishedPulling="2026-03-20 07:26:02.978044847 +0000 UTC m=+2195.237356008" observedRunningTime="2026-03-20 07:26:03.473541524 +0000 UTC m=+2195.732852675" watchObservedRunningTime="2026-03-20 07:26:03.476902707 +0000 UTC m=+2195.736213878" Mar 20 07:26:04 crc kubenswrapper[5136]: I0320 07:26:04.468012 5136 generic.go:334] "Generic (PLEG): container finished" podID="198ab1b0-b88b-4a70-aae0-650c78826519" containerID="e9b25b5fd15a592a38a8ef02f8f0787d7aae0d22dff9f222f8354b7975e0efb4" exitCode=0 Mar 20 07:26:04 crc 
kubenswrapper[5136]: I0320 07:26:04.468051 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" event={"ID":"198ab1b0-b88b-4a70-aae0-650c78826519","Type":"ContainerDied","Data":"e9b25b5fd15a592a38a8ef02f8f0787d7aae0d22dff9f222f8354b7975e0efb4"} Mar 20 07:26:05 crc kubenswrapper[5136]: I0320 07:26:05.833686 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.032904 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxjlx\" (UniqueName: \"kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx\") pod \"198ab1b0-b88b-4a70-aae0-650c78826519\" (UID: \"198ab1b0-b88b-4a70-aae0-650c78826519\") " Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.041923 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx" (OuterVolumeSpecName: "kube-api-access-fxjlx") pod "198ab1b0-b88b-4a70-aae0-650c78826519" (UID: "198ab1b0-b88b-4a70-aae0-650c78826519"). InnerVolumeSpecName "kube-api-access-fxjlx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.134834 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxjlx\" (UniqueName: \"kubernetes.io/projected/198ab1b0-b88b-4a70-aae0-650c78826519-kube-api-access-fxjlx\") on node \"crc\" DevicePath \"\"" Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.484708 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" event={"ID":"198ab1b0-b88b-4a70-aae0-650c78826519","Type":"ContainerDied","Data":"194d752cbc87284ddf7365784184b26d6f665f22279e41cd0d0f5ea006c35f32"} Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.484752 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566526-qp2cz" Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.484778 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="194d752cbc87284ddf7365784184b26d6f665f22279e41cd0d0f5ea006c35f32" Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.538524 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566520-gp87b"] Mar 20 07:26:06 crc kubenswrapper[5136]: I0320 07:26:06.546673 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566520-gp87b"] Mar 20 07:26:08 crc kubenswrapper[5136]: I0320 07:26:08.406789 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1114e255-4c25-4a30-88fb-4393c90a6d27" path="/var/lib/kubelet/pods/1114e255-4c25-4a30-88fb-4393c90a6d27/volumes" Mar 20 07:26:33 crc kubenswrapper[5136]: I0320 07:26:33.397132 5136 scope.go:117] "RemoveContainer" containerID="cbc3d1a89274343d759e3c647c542017f95e292a6f7b2eb7b7c31cedebd75f6f" Mar 20 07:26:45 crc kubenswrapper[5136]: I0320 07:26:45.821881 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:26:45 crc kubenswrapper[5136]: I0320 07:26:45.822646 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:27:03 crc kubenswrapper[5136]: I0320 07:27:03.860923 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:03 crc kubenswrapper[5136]: E0320 07:27:03.862025 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="198ab1b0-b88b-4a70-aae0-650c78826519" containerName="oc" Mar 20 07:27:03 crc kubenswrapper[5136]: I0320 07:27:03.862051 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="198ab1b0-b88b-4a70-aae0-650c78826519" containerName="oc" Mar 20 07:27:03 crc kubenswrapper[5136]: I0320 07:27:03.862311 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="198ab1b0-b88b-4a70-aae0-650c78826519" containerName="oc" Mar 20 07:27:03 crc kubenswrapper[5136]: I0320 07:27:03.864139 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:03 crc kubenswrapper[5136]: I0320 07:27:03.879096 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.001531 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.001611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.001684 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv8nv\" (UniqueName: \"kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.103213 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.103260 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.103292 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv8nv\" (UniqueName: \"kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.103732 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.103878 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.121624 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv8nv\" (UniqueName: \"kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv\") pod \"redhat-marketplace-6qtd9\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.199079 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.648702 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.987713 5136 generic.go:334] "Generic (PLEG): container finished" podID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerID="2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d" exitCode=0 Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.987751 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerDied","Data":"2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d"} Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.987778 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerStarted","Data":"8c8a7c0d79ff4de02e1fafa971deb807b56f56fea37e588456a8b2dd66558e0d"} Mar 20 07:27:04 crc kubenswrapper[5136]: I0320 07:27:04.989899 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:27:05 crc kubenswrapper[5136]: I0320 07:27:05.998195 5136 generic.go:334] "Generic (PLEG): container finished" podID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerID="3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f" exitCode=0 Mar 20 07:27:05 crc kubenswrapper[5136]: I0320 07:27:05.998478 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerDied","Data":"3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f"} Mar 20 07:27:07 crc kubenswrapper[5136]: I0320 07:27:07.006574 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerStarted","Data":"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c"} Mar 20 07:27:07 crc kubenswrapper[5136]: I0320 07:27:07.026148 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6qtd9" podStartSLOduration=2.401815712 podStartE2EDuration="4.026123127s" podCreationTimestamp="2026-03-20 07:27:03 +0000 UTC" firstStartedPulling="2026-03-20 07:27:04.989692073 +0000 UTC m=+2257.249003224" lastFinishedPulling="2026-03-20 07:27:06.613999488 +0000 UTC m=+2258.873310639" observedRunningTime="2026-03-20 07:27:07.024715823 +0000 UTC m=+2259.284027054" watchObservedRunningTime="2026-03-20 07:27:07.026123127 +0000 UTC m=+2259.285434318" Mar 20 07:27:14 crc kubenswrapper[5136]: I0320 07:27:14.200206 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:14 crc kubenswrapper[5136]: I0320 07:27:14.200724 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:14 crc kubenswrapper[5136]: I0320 07:27:14.249565 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:15 crc kubenswrapper[5136]: I0320 07:27:15.106447 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:15 crc kubenswrapper[5136]: I0320 07:27:15.184042 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:15 crc kubenswrapper[5136]: I0320 07:27:15.822267 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness 
probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:27:15 crc kubenswrapper[5136]: I0320 07:27:15.822352 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.079789 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6qtd9" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="registry-server" containerID="cri-o://c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c" gracePeriod=2 Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.615970 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.696244 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv8nv\" (UniqueName: \"kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv\") pod \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.696296 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content\") pod \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.696372 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities\") pod \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\" (UID: \"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7\") " Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.697178 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities" (OuterVolumeSpecName: "utilities") pod "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" (UID: "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.701798 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv" (OuterVolumeSpecName: "kube-api-access-jv8nv") pod "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" (UID: "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7"). InnerVolumeSpecName "kube-api-access-jv8nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.742871 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" (UID: "5ecae231-ad48-4b41-ac45-5d2b0bbe46e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.798356 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.798398 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jv8nv\" (UniqueName: \"kubernetes.io/projected/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-kube-api-access-jv8nv\") on node \"crc\" DevicePath \"\"" Mar 20 07:27:17 crc kubenswrapper[5136]: I0320 07:27:17.798417 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.089385 5136 generic.go:334] "Generic (PLEG): container finished" podID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerID="c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c" exitCode=0 Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.089456 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerDied","Data":"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c"} Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.089492 5136 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6qtd9" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.089504 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6qtd9" event={"ID":"5ecae231-ad48-4b41-ac45-5d2b0bbe46e7","Type":"ContainerDied","Data":"8c8a7c0d79ff4de02e1fafa971deb807b56f56fea37e588456a8b2dd66558e0d"} Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.089539 5136 scope.go:117] "RemoveContainer" containerID="c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.120069 5136 scope.go:117] "RemoveContainer" containerID="3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.135253 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.141487 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6qtd9"] Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.149880 5136 scope.go:117] "RemoveContainer" containerID="2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.181404 5136 scope.go:117] "RemoveContainer" containerID="c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c" Mar 20 07:27:18 crc kubenswrapper[5136]: E0320 07:27:18.181967 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c\": container with ID starting with c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c not found: ID does not exist" containerID="c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.182008 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c"} err="failed to get container status \"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c\": rpc error: code = NotFound desc = could not find container \"c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c\": container with ID starting with c49e5adfaf783e06bad0415f74f43b3b6fe84504a805313f8f3100988eef7d9c not found: ID does not exist" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.182050 5136 scope.go:117] "RemoveContainer" containerID="3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f" Mar 20 07:27:18 crc kubenswrapper[5136]: E0320 07:27:18.182314 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f\": container with ID starting with 3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f not found: ID does not exist" containerID="3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.182351 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f"} err="failed to get container status \"3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f\": rpc error: code = NotFound desc = could not find container \"3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f\": container with ID starting with 3502a6b0861a74870f56ab3b29d981e599f698301fc14dec5980f4ae99d0b01f not found: ID does not exist" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.182377 5136 scope.go:117] "RemoveContainer" containerID="2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d" Mar 20 07:27:18 crc kubenswrapper[5136]: E0320 
07:27:18.182725 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d\": container with ID starting with 2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d not found: ID does not exist" containerID="2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.182745 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d"} err="failed to get container status \"2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d\": rpc error: code = NotFound desc = could not find container \"2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d\": container with ID starting with 2fdce7ecbb81964ed5aca79d62fe826d408a93678e905eb1a94ad0f8d9ffd41d not found: ID does not exist" Mar 20 07:27:18 crc kubenswrapper[5136]: I0320 07:27:18.407861 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" path="/var/lib/kubelet/pods/5ecae231-ad48-4b41-ac45-5d2b0bbe46e7/volumes" Mar 20 07:27:45 crc kubenswrapper[5136]: I0320 07:27:45.822674 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:27:45 crc kubenswrapper[5136]: I0320 07:27:45.823473 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Mar 20 07:27:45 crc kubenswrapper[5136]: I0320 07:27:45.823549 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:27:45 crc kubenswrapper[5136]: I0320 07:27:45.824428 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:27:45 crc kubenswrapper[5136]: I0320 07:27:45.824512 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6" gracePeriod=600 Mar 20 07:27:46 crc kubenswrapper[5136]: I0320 07:27:46.328788 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6" exitCode=0 Mar 20 07:27:46 crc kubenswrapper[5136]: I0320 07:27:46.328939 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6"} Mar 20 07:27:46 crc kubenswrapper[5136]: I0320 07:27:46.329087 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"} Mar 20 07:27:46 crc 
kubenswrapper[5136]: I0320 07:27:46.329142 5136 scope.go:117] "RemoveContainer" containerID="4b1473d6fd84f1316ee1abc88e129900360a010045619e086bb5c7701169e59d" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.161665 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566528-p9gfh"] Mar 20 07:28:00 crc kubenswrapper[5136]: E0320 07:28:00.162921 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="extract-content" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.162935 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="extract-content" Mar 20 07:28:00 crc kubenswrapper[5136]: E0320 07:28:00.162965 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="extract-utilities" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.162972 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="extract-utilities" Mar 20 07:28:00 crc kubenswrapper[5136]: E0320 07:28:00.162990 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="registry-server" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.162996 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="registry-server" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.163136 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ecae231-ad48-4b41-ac45-5d2b0bbe46e7" containerName="registry-server" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.163660 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.171483 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.171571 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.173250 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.191078 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566528-p9gfh"] Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.314534 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjjjj\" (UniqueName: \"kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj\") pod \"auto-csr-approver-29566528-p9gfh\" (UID: \"0663cd7c-704c-4495-8271-f55538649003\") " pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.416688 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjjjj\" (UniqueName: \"kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj\") pod \"auto-csr-approver-29566528-p9gfh\" (UID: \"0663cd7c-704c-4495-8271-f55538649003\") " pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.443461 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjjjj\" (UniqueName: \"kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj\") pod \"auto-csr-approver-29566528-p9gfh\" (UID: \"0663cd7c-704c-4495-8271-f55538649003\") " 
pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.494854 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:00 crc kubenswrapper[5136]: I0320 07:28:00.963860 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566528-p9gfh"] Mar 20 07:28:01 crc kubenswrapper[5136]: I0320 07:28:01.472139 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" event={"ID":"0663cd7c-704c-4495-8271-f55538649003","Type":"ContainerStarted","Data":"fe6d42779a583ae94fdb31de8372f662e91aa9a0f7aa882296d25fd0dc014236"} Mar 20 07:28:04 crc kubenswrapper[5136]: I0320 07:28:04.493395 5136 generic.go:334] "Generic (PLEG): container finished" podID="0663cd7c-704c-4495-8271-f55538649003" containerID="8e7514aba4ea3d84ec9496fc84994ced79208352205e65a08cbf2bd32660e7b5" exitCode=0 Mar 20 07:28:04 crc kubenswrapper[5136]: I0320 07:28:04.493465 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" event={"ID":"0663cd7c-704c-4495-8271-f55538649003","Type":"ContainerDied","Data":"8e7514aba4ea3d84ec9496fc84994ced79208352205e65a08cbf2bd32660e7b5"} Mar 20 07:28:05 crc kubenswrapper[5136]: I0320 07:28:05.802763 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.000169 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjjjj\" (UniqueName: \"kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj\") pod \"0663cd7c-704c-4495-8271-f55538649003\" (UID: \"0663cd7c-704c-4495-8271-f55538649003\") " Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.008271 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj" (OuterVolumeSpecName: "kube-api-access-gjjjj") pod "0663cd7c-704c-4495-8271-f55538649003" (UID: "0663cd7c-704c-4495-8271-f55538649003"). InnerVolumeSpecName "kube-api-access-gjjjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.102317 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjjjj\" (UniqueName: \"kubernetes.io/projected/0663cd7c-704c-4495-8271-f55538649003-kube-api-access-gjjjj\") on node \"crc\" DevicePath \"\"" Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.511062 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" event={"ID":"0663cd7c-704c-4495-8271-f55538649003","Type":"ContainerDied","Data":"fe6d42779a583ae94fdb31de8372f662e91aa9a0f7aa882296d25fd0dc014236"} Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.511107 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe6d42779a583ae94fdb31de8372f662e91aa9a0f7aa882296d25fd0dc014236" Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.511171 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566528-p9gfh" Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.882698 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566522-6qksf"] Mar 20 07:28:06 crc kubenswrapper[5136]: I0320 07:28:06.888935 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566522-6qksf"] Mar 20 07:28:08 crc kubenswrapper[5136]: I0320 07:28:08.408449 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89e4d1fb-8e51-468f-877b-49847c583d53" path="/var/lib/kubelet/pods/89e4d1fb-8e51-468f-877b-49847c583d53/volumes" Mar 20 07:28:33 crc kubenswrapper[5136]: I0320 07:28:33.528499 5136 scope.go:117] "RemoveContainer" containerID="18ba546b64b0c89c0f6afc33df4ae25c09a3d7098c15cfd5a2ea63fa1ebde19e" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.146327 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566530-wht58"] Mar 20 07:30:00 crc kubenswrapper[5136]: E0320 07:30:00.147370 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0663cd7c-704c-4495-8271-f55538649003" containerName="oc" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.147395 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0663cd7c-704c-4495-8271-f55538649003" containerName="oc" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.147678 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0663cd7c-704c-4495-8271-f55538649003" containerName="oc" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.148367 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.152838 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.153098 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.153333 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.161754 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566530-wht58"] Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.176020 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cq77s\" (UniqueName: \"kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s\") pod \"auto-csr-approver-29566530-wht58\" (UID: \"2d0faa53-8471-40c0-a2ed-ef66d5b66e72\") " pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.191697 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz"] Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.193134 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.196783 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.196838 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.205078 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz"] Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.277189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.277234 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.277273 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg4kz\" (UniqueName: \"kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.277296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cq77s\" (UniqueName: \"kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s\") pod \"auto-csr-approver-29566530-wht58\" (UID: \"2d0faa53-8471-40c0-a2ed-ef66d5b66e72\") " pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.300950 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cq77s\" (UniqueName: \"kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s\") pod \"auto-csr-approver-29566530-wht58\" (UID: \"2d0faa53-8471-40c0-a2ed-ef66d5b66e72\") " pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.379196 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.379696 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.379864 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg4kz\" (UniqueName: 
\"kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.380491 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.384570 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.403844 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg4kz\" (UniqueName: \"kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz\") pod \"collect-profiles-29566530-b9fhz\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.479663 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.527246 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.799478 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz"] Mar 20 07:30:00 crc kubenswrapper[5136]: I0320 07:30:00.932687 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566530-wht58"] Mar 20 07:30:01 crc kubenswrapper[5136]: I0320 07:30:01.433904 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566530-wht58" event={"ID":"2d0faa53-8471-40c0-a2ed-ef66d5b66e72","Type":"ContainerStarted","Data":"8f725118d8b349ca7d53012e9026199e683ebf9bcdef4c5634f52e153f090d81"} Mar 20 07:30:01 crc kubenswrapper[5136]: I0320 07:30:01.435689 5136 generic.go:334] "Generic (PLEG): container finished" podID="d251ba65-cac2-4d94-b882-672d97a85bc7" containerID="ca492a12ee4dfef81804d9a43645add86ef8ab0ce16812e4c74a09d17ae0ea3c" exitCode=0 Mar 20 07:30:01 crc kubenswrapper[5136]: I0320 07:30:01.435735 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" event={"ID":"d251ba65-cac2-4d94-b882-672d97a85bc7","Type":"ContainerDied","Data":"ca492a12ee4dfef81804d9a43645add86ef8ab0ce16812e4c74a09d17ae0ea3c"} Mar 20 07:30:01 crc kubenswrapper[5136]: I0320 07:30:01.435797 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" event={"ID":"d251ba65-cac2-4d94-b882-672d97a85bc7","Type":"ContainerStarted","Data":"08250740f94fe396ced2c2691bc1e32d26db60f0f4e97fbd3f3257c9f6f4333a"} Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.753432 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.814103 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg4kz\" (UniqueName: \"kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz\") pod \"d251ba65-cac2-4d94-b882-672d97a85bc7\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.814168 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume\") pod \"d251ba65-cac2-4d94-b882-672d97a85bc7\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.814234 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume\") pod \"d251ba65-cac2-4d94-b882-672d97a85bc7\" (UID: \"d251ba65-cac2-4d94-b882-672d97a85bc7\") " Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.814926 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume" (OuterVolumeSpecName: "config-volume") pod "d251ba65-cac2-4d94-b882-672d97a85bc7" (UID: "d251ba65-cac2-4d94-b882-672d97a85bc7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.818748 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz" (OuterVolumeSpecName: "kube-api-access-xg4kz") pod "d251ba65-cac2-4d94-b882-672d97a85bc7" (UID: "d251ba65-cac2-4d94-b882-672d97a85bc7"). 
InnerVolumeSpecName "kube-api-access-xg4kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.820449 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d251ba65-cac2-4d94-b882-672d97a85bc7" (UID: "d251ba65-cac2-4d94-b882-672d97a85bc7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.916192 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d251ba65-cac2-4d94-b882-672d97a85bc7-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.916477 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d251ba65-cac2-4d94-b882-672d97a85bc7-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:30:02 crc kubenswrapper[5136]: I0320 07:30:02.916487 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg4kz\" (UniqueName: \"kubernetes.io/projected/d251ba65-cac2-4d94-b882-672d97a85bc7-kube-api-access-xg4kz\") on node \"crc\" DevicePath \"\"" Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.452605 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.452692 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz" event={"ID":"d251ba65-cac2-4d94-b882-672d97a85bc7","Type":"ContainerDied","Data":"08250740f94fe396ced2c2691bc1e32d26db60f0f4e97fbd3f3257c9f6f4333a"} Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.452747 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08250740f94fe396ced2c2691bc1e32d26db60f0f4e97fbd3f3257c9f6f4333a" Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.454420 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566530-wht58" event={"ID":"2d0faa53-8471-40c0-a2ed-ef66d5b66e72","Type":"ContainerStarted","Data":"a0171e6422989bbfb70e3a76ced8595e932c612b43527acfa116a80d4b912265"} Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.478774 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566530-wht58" podStartSLOduration=1.398690712 podStartE2EDuration="3.478744629s" podCreationTimestamp="2026-03-20 07:30:00 +0000 UTC" firstStartedPulling="2026-03-20 07:30:00.939598572 +0000 UTC m=+2433.198909733" lastFinishedPulling="2026-03-20 07:30:03.019652489 +0000 UTC m=+2435.278963650" observedRunningTime="2026-03-20 07:30:03.470434703 +0000 UTC m=+2435.729745894" watchObservedRunningTime="2026-03-20 07:30:03.478744629 +0000 UTC m=+2435.738055820" Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.845898 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"] Mar 20 07:30:03 crc kubenswrapper[5136]: I0320 07:30:03.852799 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29566485-n6252"] Mar 20 07:30:04 crc kubenswrapper[5136]: I0320 07:30:04.409693 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8" path="/var/lib/kubelet/pods/d7494f78-bf5b-4a7a-a7b8-9fdaf44217b8/volumes" Mar 20 07:30:04 crc kubenswrapper[5136]: I0320 07:30:04.486238 5136 generic.go:334] "Generic (PLEG): container finished" podID="2d0faa53-8471-40c0-a2ed-ef66d5b66e72" containerID="a0171e6422989bbfb70e3a76ced8595e932c612b43527acfa116a80d4b912265" exitCode=0 Mar 20 07:30:04 crc kubenswrapper[5136]: I0320 07:30:04.486327 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566530-wht58" event={"ID":"2d0faa53-8471-40c0-a2ed-ef66d5b66e72","Type":"ContainerDied","Data":"a0171e6422989bbfb70e3a76ced8595e932c612b43527acfa116a80d4b912265"} Mar 20 07:30:05 crc kubenswrapper[5136]: I0320 07:30:05.815327 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:05 crc kubenswrapper[5136]: I0320 07:30:05.885665 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cq77s\" (UniqueName: \"kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s\") pod \"2d0faa53-8471-40c0-a2ed-ef66d5b66e72\" (UID: \"2d0faa53-8471-40c0-a2ed-ef66d5b66e72\") " Mar 20 07:30:05 crc kubenswrapper[5136]: I0320 07:30:05.893409 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s" (OuterVolumeSpecName: "kube-api-access-cq77s") pod "2d0faa53-8471-40c0-a2ed-ef66d5b66e72" (UID: "2d0faa53-8471-40c0-a2ed-ef66d5b66e72"). InnerVolumeSpecName "kube-api-access-cq77s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:30:05 crc kubenswrapper[5136]: I0320 07:30:05.988029 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cq77s\" (UniqueName: \"kubernetes.io/projected/2d0faa53-8471-40c0-a2ed-ef66d5b66e72-kube-api-access-cq77s\") on node \"crc\" DevicePath \"\"" Mar 20 07:30:06 crc kubenswrapper[5136]: I0320 07:30:06.506323 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566530-wht58" event={"ID":"2d0faa53-8471-40c0-a2ed-ef66d5b66e72","Type":"ContainerDied","Data":"8f725118d8b349ca7d53012e9026199e683ebf9bcdef4c5634f52e153f090d81"} Mar 20 07:30:06 crc kubenswrapper[5136]: I0320 07:30:06.506405 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f725118d8b349ca7d53012e9026199e683ebf9bcdef4c5634f52e153f090d81" Mar 20 07:30:06 crc kubenswrapper[5136]: I0320 07:30:06.506526 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566530-wht58" Mar 20 07:30:06 crc kubenswrapper[5136]: I0320 07:30:06.535842 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566524-rfjd7"] Mar 20 07:30:06 crc kubenswrapper[5136]: I0320 07:30:06.541676 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566524-rfjd7"] Mar 20 07:30:08 crc kubenswrapper[5136]: I0320 07:30:08.413337 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef36bf3c-a18a-4fe4-829e-818ee309667e" path="/var/lib/kubelet/pods/ef36bf3c-a18a-4fe4-829e-818ee309667e/volumes" Mar 20 07:30:15 crc kubenswrapper[5136]: I0320 07:30:15.822623 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 20 07:30:15 crc kubenswrapper[5136]: I0320 07:30:15.822966 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:30:33 crc kubenswrapper[5136]: I0320 07:30:33.630247 5136 scope.go:117] "RemoveContainer" containerID="b0a37f409a618d7e4e5c53b904c0bfbe0cfe72b34fedd22d7f0f23af1ad97d50" Mar 20 07:30:33 crc kubenswrapper[5136]: I0320 07:30:33.667887 5136 scope.go:117] "RemoveContainer" containerID="f960ca2f1291c6810939c91cb385274efd1428e9971de1fcce80d392e52b2a36" Mar 20 07:30:45 crc kubenswrapper[5136]: I0320 07:30:45.822739 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:30:45 crc kubenswrapper[5136]: I0320 07:30:45.823406 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:31:15 crc kubenswrapper[5136]: I0320 07:31:15.821917 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:31:15 crc kubenswrapper[5136]: I0320 07:31:15.822606 5136 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:31:15 crc kubenswrapper[5136]: I0320 07:31:15.822659 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:31:15 crc kubenswrapper[5136]: I0320 07:31:15.823372 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:31:15 crc kubenswrapper[5136]: I0320 07:31:15.823441 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" gracePeriod=600 Mar 20 07:31:15 crc kubenswrapper[5136]: E0320 07:31:15.949980 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:31:16 crc kubenswrapper[5136]: I0320 07:31:16.121478 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" 
containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" exitCode=0 Mar 20 07:31:16 crc kubenswrapper[5136]: I0320 07:31:16.121528 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"} Mar 20 07:31:16 crc kubenswrapper[5136]: I0320 07:31:16.121566 5136 scope.go:117] "RemoveContainer" containerID="f5574ea7c501f0833443d13c0482979038eb9dd402ea30f057e61e453a0be9c6" Mar 20 07:31:16 crc kubenswrapper[5136]: I0320 07:31:16.122249 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:31:16 crc kubenswrapper[5136]: E0320 07:31:16.122851 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:31:26 crc kubenswrapper[5136]: I0320 07:31:26.396768 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:31:26 crc kubenswrapper[5136]: E0320 07:31:26.397561 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:31:41 crc kubenswrapper[5136]: I0320 
07:31:41.396369 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:31:41 crc kubenswrapper[5136]: E0320 07:31:41.397248 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:31:52 crc kubenswrapper[5136]: I0320 07:31:52.396701 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:31:52 crc kubenswrapper[5136]: E0320 07:31:52.397456 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.587183 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tk65m"] Mar 20 07:31:59 crc kubenswrapper[5136]: E0320 07:31:59.588395 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d251ba65-cac2-4d94-b882-672d97a85bc7" containerName="collect-profiles" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.588426 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d251ba65-cac2-4d94-b882-672d97a85bc7" containerName="collect-profiles" Mar 20 07:31:59 crc kubenswrapper[5136]: E0320 07:31:59.588472 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2d0faa53-8471-40c0-a2ed-ef66d5b66e72" containerName="oc" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.588493 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d0faa53-8471-40c0-a2ed-ef66d5b66e72" containerName="oc" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.588764 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d0faa53-8471-40c0-a2ed-ef66d5b66e72" containerName="oc" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.588808 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d251ba65-cac2-4d94-b882-672d97a85bc7" containerName="collect-profiles" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.590627 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.597215 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tk65m"] Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.666050 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.666104 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtvbp\" (UniqueName: \"kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.666287 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.767015 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.767091 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.767124 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtvbp\" (UniqueName: \"kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.767655 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.767717 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.796435 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtvbp\" (UniqueName: \"kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp\") pod \"certified-operators-tk65m\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") " pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:31:59 crc kubenswrapper[5136]: I0320 07:31:59.925773 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tk65m" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.139713 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566532-twtd9"] Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.146094 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.151184 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.151253 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.151451 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.156168 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566532-twtd9"] Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.274772 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzrxx\" (UniqueName: \"kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx\") pod \"auto-csr-approver-29566532-twtd9\" (UID: \"e4c27187-55d8-4db4-9cae-d77617300a14\") " pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.375689 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzrxx\" (UniqueName: \"kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx\") pod \"auto-csr-approver-29566532-twtd9\" (UID: \"e4c27187-55d8-4db4-9cae-d77617300a14\") " pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.378730 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tk65m"] Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.400628 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzrxx\" (UniqueName: 
\"kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx\") pod \"auto-csr-approver-29566532-twtd9\" (UID: \"e4c27187-55d8-4db4-9cae-d77617300a14\") " pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.466490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerStarted","Data":"f1db3e0f434391de48b8a5e136738f7ddfe0ad77069b404f19c69058e7dcef08"} Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.475955 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:00 crc kubenswrapper[5136]: I0320 07:32:00.675758 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566532-twtd9"] Mar 20 07:32:00 crc kubenswrapper[5136]: W0320 07:32:00.676443 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4c27187_55d8_4db4_9cae_d77617300a14.slice/crio-81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583 WatchSource:0}: Error finding container 81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583: Status 404 returned error can't find the container with id 81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583 Mar 20 07:32:01 crc kubenswrapper[5136]: I0320 07:32:01.478718 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566532-twtd9" event={"ID":"e4c27187-55d8-4db4-9cae-d77617300a14","Type":"ContainerStarted","Data":"81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583"} Mar 20 07:32:01 crc kubenswrapper[5136]: I0320 07:32:01.480747 5136 generic.go:334] "Generic (PLEG): container finished" podID="e6f69975-a243-4554-8864-968b28f34bb1" 
containerID="605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789" exitCode=0 Mar 20 07:32:01 crc kubenswrapper[5136]: I0320 07:32:01.480871 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerDied","Data":"605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789"} Mar 20 07:32:02 crc kubenswrapper[5136]: I0320 07:32:02.488493 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerStarted","Data":"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363"} Mar 20 07:32:02 crc kubenswrapper[5136]: I0320 07:32:02.490106 5136 generic.go:334] "Generic (PLEG): container finished" podID="e4c27187-55d8-4db4-9cae-d77617300a14" containerID="5c9dbddef3617b2a1a9f29b6615bed2e74b730bf03006a402fac0528653fa989" exitCode=0 Mar 20 07:32:02 crc kubenswrapper[5136]: I0320 07:32:02.490130 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566532-twtd9" event={"ID":"e4c27187-55d8-4db4-9cae-d77617300a14","Type":"ContainerDied","Data":"5c9dbddef3617b2a1a9f29b6615bed2e74b730bf03006a402fac0528653fa989"} Mar 20 07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.398038 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:32:03 crc kubenswrapper[5136]: E0320 07:32:03.398563 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 
07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.503996 5136 generic.go:334] "Generic (PLEG): container finished" podID="e6f69975-a243-4554-8864-968b28f34bb1" containerID="e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363" exitCode=0 Mar 20 07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.504084 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerDied","Data":"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363"} Mar 20 07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.880062 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566532-twtd9" Mar 20 07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.932152 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzrxx\" (UniqueName: \"kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx\") pod \"e4c27187-55d8-4db4-9cae-d77617300a14\" (UID: \"e4c27187-55d8-4db4-9cae-d77617300a14\") " Mar 20 07:32:03 crc kubenswrapper[5136]: I0320 07:32:03.939084 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx" (OuterVolumeSpecName: "kube-api-access-xzrxx") pod "e4c27187-55d8-4db4-9cae-d77617300a14" (UID: "e4c27187-55d8-4db4-9cae-d77617300a14"). InnerVolumeSpecName "kube-api-access-xzrxx". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.033522 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzrxx\" (UniqueName: \"kubernetes.io/projected/e4c27187-55d8-4db4-9cae-d77617300a14-kube-api-access-xzrxx\") on node \"crc\" DevicePath \"\""
Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.514159 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566532-twtd9"
Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.514152 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566532-twtd9" event={"ID":"e4c27187-55d8-4db4-9cae-d77617300a14","Type":"ContainerDied","Data":"81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583"}
Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.514614 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81591355317b13a0a8698f58721c71f74f88a3e6f5808d8684f5146e93341583"
Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.516905 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerStarted","Data":"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e"}
Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.548598 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tk65m" podStartSLOduration=2.833155654 podStartE2EDuration="5.548574426s" podCreationTimestamp="2026-03-20 07:31:59 +0000 UTC" firstStartedPulling="2026-03-20 07:32:01.482864103 +0000 UTC m=+2553.742175254" lastFinishedPulling="2026-03-20 07:32:04.198282835 +0000 UTC m=+2556.457594026" observedRunningTime="2026-03-20 07:32:04.539743385 +0000 UTC m=+2556.799054566" watchObservedRunningTime="2026-03-20 07:32:04.548574426 +0000 UTC m=+2556.807885607"
Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.955322 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566526-qp2cz"]
Mar 20 07:32:04 crc kubenswrapper[5136]: I0320 07:32:04.966255 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566526-qp2cz"]
Mar 20 07:32:06 crc kubenswrapper[5136]: I0320 07:32:06.407283 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="198ab1b0-b88b-4a70-aae0-650c78826519" path="/var/lib/kubelet/pods/198ab1b0-b88b-4a70-aae0-650c78826519/volumes"
Mar 20 07:32:09 crc kubenswrapper[5136]: I0320 07:32:09.925955 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tk65m"
Mar 20 07:32:09 crc kubenswrapper[5136]: I0320 07:32:09.926714 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tk65m"
Mar 20 07:32:09 crc kubenswrapper[5136]: I0320 07:32:09.973612 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tk65m"
Mar 20 07:32:10 crc kubenswrapper[5136]: I0320 07:32:10.603134 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tk65m"
Mar 20 07:32:10 crc kubenswrapper[5136]: I0320 07:32:10.643108 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tk65m"]
Mar 20 07:32:12 crc kubenswrapper[5136]: I0320 07:32:12.583453 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tk65m" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="registry-server" containerID="cri-o://2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e" gracePeriod=2
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.542504 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tk65m"
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.593232 5136 generic.go:334] "Generic (PLEG): container finished" podID="e6f69975-a243-4554-8864-968b28f34bb1" containerID="2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e" exitCode=0
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.593277 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerDied","Data":"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e"}
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.593289 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tk65m"
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.593311 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tk65m" event={"ID":"e6f69975-a243-4554-8864-968b28f34bb1","Type":"ContainerDied","Data":"f1db3e0f434391de48b8a5e136738f7ddfe0ad77069b404f19c69058e7dcef08"}
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.593331 5136 scope.go:117] "RemoveContainer" containerID="2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e"
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.617434 5136 scope.go:117] "RemoveContainer" containerID="e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363"
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.618773 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content\") pod \"e6f69975-a243-4554-8864-968b28f34bb1\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") "
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.618855 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtvbp\" (UniqueName: \"kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp\") pod \"e6f69975-a243-4554-8864-968b28f34bb1\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") "
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.618949 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities\") pod \"e6f69975-a243-4554-8864-968b28f34bb1\" (UID: \"e6f69975-a243-4554-8864-968b28f34bb1\") "
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.620225 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities" (OuterVolumeSpecName: "utilities") pod "e6f69975-a243-4554-8864-968b28f34bb1" (UID: "e6f69975-a243-4554-8864-968b28f34bb1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.624034 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp" (OuterVolumeSpecName: "kube-api-access-dtvbp") pod "e6f69975-a243-4554-8864-968b28f34bb1" (UID: "e6f69975-a243-4554-8864-968b28f34bb1"). InnerVolumeSpecName "kube-api-access-dtvbp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.634537 5136 scope.go:117] "RemoveContainer" containerID="605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789"
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.673435 5136 scope.go:117] "RemoveContainer" containerID="2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e"
Mar 20 07:32:13 crc kubenswrapper[5136]: E0320 07:32:13.676475 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e\": container with ID starting with 2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e not found: ID does not exist" containerID="2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e"
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.676560 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e"} err="failed to get container status \"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e\": rpc error: code = NotFound desc = could not find container \"2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e\": container with ID starting with 2ca4467384620e0615f42bffc25d234a2c1a9e6f9cc2ce3eeaa59e97f68f1e3e not found: ID does not exist"
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.676594 5136 scope.go:117] "RemoveContainer" containerID="e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363"
Mar 20 07:32:13 crc kubenswrapper[5136]: E0320 07:32:13.676909 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363\": container with ID starting with e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363 not found: ID does not exist" containerID="e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363"
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.676952 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363"} err="failed to get container status \"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363\": rpc error: code = NotFound desc = could not find container \"e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363\": container with ID starting with e4b067f3296b4ad3c880f59931da967d610017a5079df52dd4fd9c70e9767363 not found: ID does not exist"
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.676977 5136 scope.go:117] "RemoveContainer" containerID="605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789"
Mar 20 07:32:13 crc kubenswrapper[5136]: E0320 07:32:13.677265 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789\": container with ID starting with 605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789 not found: ID does not exist" containerID="605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789"
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.677299 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789"} err="failed to get container status \"605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789\": rpc error: code = NotFound desc = could not find container \"605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789\": container with ID starting with 605fab57bde4d90aa04074584e1ee7a3eddccb5c298651670ef61f0de3c62789 not found: ID does not exist"
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.682225 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e6f69975-a243-4554-8864-968b28f34bb1" (UID: "e6f69975-a243-4554-8864-968b28f34bb1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.720072 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtvbp\" (UniqueName: \"kubernetes.io/projected/e6f69975-a243-4554-8864-968b28f34bb1-kube-api-access-dtvbp\") on node \"crc\" DevicePath \"\""
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.720104 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.720116 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e6f69975-a243-4554-8864-968b28f34bb1-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.938170 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tk65m"]
Mar 20 07:32:13 crc kubenswrapper[5136]: I0320 07:32:13.944005 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tk65m"]
Mar 20 07:32:14 crc kubenswrapper[5136]: I0320 07:32:14.407104 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6f69975-a243-4554-8864-968b28f34bb1" path="/var/lib/kubelet/pods/e6f69975-a243-4554-8864-968b28f34bb1/volumes"
Mar 20 07:32:18 crc kubenswrapper[5136]: I0320 07:32:18.407081 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:32:18 crc kubenswrapper[5136]: E0320 07:32:18.407748 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:32:32 crc kubenswrapper[5136]: I0320 07:32:32.396671 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:32:32 crc kubenswrapper[5136]: E0320 07:32:32.397463 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:32:33 crc kubenswrapper[5136]: I0320 07:32:33.739399 5136 scope.go:117] "RemoveContainer" containerID="e9b25b5fd15a592a38a8ef02f8f0787d7aae0d22dff9f222f8354b7975e0efb4"
Mar 20 07:32:44 crc kubenswrapper[5136]: I0320 07:32:44.399055 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:32:44 crc kubenswrapper[5136]: E0320 07:32:44.400538 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.751083 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mhq7q"]
Mar 20 07:32:52 crc kubenswrapper[5136]: E0320 07:32:52.751827 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="registry-server"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.751841 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="registry-server"
Mar 20 07:32:52 crc kubenswrapper[5136]: E0320 07:32:52.751855 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="extract-content"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.751863 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="extract-content"
Mar 20 07:32:52 crc kubenswrapper[5136]: E0320 07:32:52.751876 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4c27187-55d8-4db4-9cae-d77617300a14" containerName="oc"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.751884 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4c27187-55d8-4db4-9cae-d77617300a14" containerName="oc"
Mar 20 07:32:52 crc kubenswrapper[5136]: E0320 07:32:52.751907 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="extract-utilities"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.751915 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="extract-utilities"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.752068 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4c27187-55d8-4db4-9cae-d77617300a14" containerName="oc"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.752090 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6f69975-a243-4554-8864-968b28f34bb1" containerName="registry-server"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.753205 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.762736 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhq7q"]
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.808284 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.808355 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.808407 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8499c\" (UniqueName: \"kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.909918 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.909995 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.910056 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8499c\" (UniqueName: \"kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.910588 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.910632 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:32:52 crc kubenswrapper[5136]: I0320 07:32:52.932158 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8499c\" (UniqueName: \"kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c\") pod \"community-operators-mhq7q\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") " pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.073515 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.560096 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mhq7q"]
Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.923526 5136 generic.go:334] "Generic (PLEG): container finished" podID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerID="3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2" exitCode=0
Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.923598 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerDied","Data":"3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2"}
Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.923639 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerStarted","Data":"f06afaedabec7e65e7e127cf6a160feaa8bfb209c8a10ddd6399781dab8be5fa"}
Mar 20 07:32:53 crc kubenswrapper[5136]: I0320 07:32:53.925316 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 07:32:54 crc kubenswrapper[5136]: I0320 07:32:54.931658 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerStarted","Data":"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618"}
Mar 20 07:32:55 crc kubenswrapper[5136]: I0320 07:32:55.397173 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:32:55 crc kubenswrapper[5136]: E0320 07:32:55.397529 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:32:55 crc kubenswrapper[5136]: I0320 07:32:55.939594 5136 generic.go:334] "Generic (PLEG): container finished" podID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerID="67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618" exitCode=0
Mar 20 07:32:55 crc kubenswrapper[5136]: I0320 07:32:55.939650 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerDied","Data":"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618"}
Mar 20 07:32:56 crc kubenswrapper[5136]: I0320 07:32:56.948623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerStarted","Data":"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584"}
Mar 20 07:32:56 crc kubenswrapper[5136]: I0320 07:32:56.967944 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mhq7q" podStartSLOduration=2.5713867969999997 podStartE2EDuration="4.967928317s" podCreationTimestamp="2026-03-20 07:32:52 +0000 UTC" firstStartedPulling="2026-03-20 07:32:53.924676593 +0000 UTC m=+2606.183987784" lastFinishedPulling="2026-03-20 07:32:56.321218103 +0000 UTC m=+2608.580529304" observedRunningTime="2026-03-20 07:32:56.967225557 +0000 UTC m=+2609.226536708" watchObservedRunningTime="2026-03-20 07:32:56.967928317 +0000 UTC m=+2609.227239468"
Mar 20 07:33:03 crc kubenswrapper[5136]: I0320 07:33:03.074329 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:33:03 crc kubenswrapper[5136]: I0320 07:33:03.074981 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:33:03 crc kubenswrapper[5136]: I0320 07:33:03.155299 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:33:04 crc kubenswrapper[5136]: I0320 07:33:04.079467 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:33:04 crc kubenswrapper[5136]: I0320 07:33:04.149727 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhq7q"]
Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.023201 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mhq7q" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="registry-server" containerID="cri-o://ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584" gracePeriod=2
Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.468837 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.535634 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8499c\" (UniqueName: \"kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c\") pod \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") "
Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.535889 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content\") pod \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") "
Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.536164 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities\") pod \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\" (UID: \"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9\") "
Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.537193 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities" (OuterVolumeSpecName: "utilities") pod "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" (UID: "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.541330 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c" (OuterVolumeSpecName: "kube-api-access-8499c") pod "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" (UID: "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9"). InnerVolumeSpecName "kube-api-access-8499c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.602732 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" (UID: "55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.638893 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8499c\" (UniqueName: \"kubernetes.io/projected/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-kube-api-access-8499c\") on node \"crc\" DevicePath \"\""
Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.639206 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 07:33:06 crc kubenswrapper[5136]: I0320 07:33:06.639359 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.033952 5136 generic.go:334] "Generic (PLEG): container finished" podID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerID="ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584" exitCode=0
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.034028 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mhq7q"
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.034022 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerDied","Data":"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584"}
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.035115 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mhq7q" event={"ID":"55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9","Type":"ContainerDied","Data":"f06afaedabec7e65e7e127cf6a160feaa8bfb209c8a10ddd6399781dab8be5fa"}
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.035186 5136 scope.go:117] "RemoveContainer" containerID="ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584"
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.061537 5136 scope.go:117] "RemoveContainer" containerID="67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618"
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.092762 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mhq7q"]
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.102620 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mhq7q"]
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.108504 5136 scope.go:117] "RemoveContainer" containerID="3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2"
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.138665 5136 scope.go:117] "RemoveContainer" containerID="ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584"
Mar 20 07:33:07 crc kubenswrapper[5136]: E0320 07:33:07.139337 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584\": container with ID starting with ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584 not found: ID does not exist" containerID="ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584"
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.139476 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584"} err="failed to get container status \"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584\": rpc error: code = NotFound desc = could not find container \"ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584\": container with ID starting with ad2fcfc2976d7d3b3786e96ea767b4b5b428ac80cfbd2beeffdf6c9b002f5584 not found: ID does not exist"
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.139658 5136 scope.go:117] "RemoveContainer" containerID="67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618"
Mar 20 07:33:07 crc kubenswrapper[5136]: E0320 07:33:07.140089 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618\": container with ID starting with 67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618 not found: ID does not exist" containerID="67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618"
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.140117 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618"} err="failed to get container status \"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618\": rpc error: code = NotFound desc = could not find container \"67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618\": container with ID starting with 67f6e944fe0c99fb6f4f72159d059398308ceb6ec0b890c134a242b1d9825618 not found: ID does not exist"
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.140135 5136 scope.go:117] "RemoveContainer" containerID="3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2"
Mar 20 07:33:07 crc kubenswrapper[5136]: E0320 07:33:07.140474 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2\": container with ID starting with 3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2 not found: ID does not exist" containerID="3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2"
Mar 20 07:33:07 crc kubenswrapper[5136]: I0320 07:33:07.140517 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2"} err="failed to get container status \"3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2\": rpc error: code = NotFound desc = could not find container \"3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2\": container with ID starting with 3283c1fc7807e9df4491b391766cf37db017729931f2cac04d7bfc68083099a2 not found: ID does not exist"
Mar 20 07:33:08 crc kubenswrapper[5136]: I0320 07:33:08.410054 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" path="/var/lib/kubelet/pods/55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9/volumes"
Mar 20 07:33:09 crc kubenswrapper[5136]: I0320 07:33:09.396517 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:33:09 crc kubenswrapper[5136]: E0320 07:33:09.396886 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:33:21 crc kubenswrapper[5136]: I0320 07:33:21.396693 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:33:21 crc kubenswrapper[5136]: E0320 07:33:21.397920 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:33:32 crc kubenswrapper[5136]: I0320 07:33:32.397057 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:33:32 crc kubenswrapper[5136]: E0320 07:33:32.397840 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 07:33:46 crc kubenswrapper[5136]: I0320 07:33:46.397655 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365"
Mar 20 07:33:46 crc kubenswrapper[5136]: E0320 07:33:46.398363 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff:
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.169740 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566534-mfjdt"] Mar 20 07:34:00 crc kubenswrapper[5136]: E0320 07:34:00.174703 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="extract-content" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.174745 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="extract-content" Mar 20 07:34:00 crc kubenswrapper[5136]: E0320 07:34:00.174775 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="extract-utilities" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.174792 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="extract-utilities" Mar 20 07:34:00 crc kubenswrapper[5136]: E0320 07:34:00.174874 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="registry-server" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.174895 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="registry-server" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.175219 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="55cddc0f-8d5f-4bbd-a52a-daa2c5d57ae9" containerName="registry-server" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.176084 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.182523 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.182858 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.189112 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.204624 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566534-mfjdt"] Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.284917 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjjd7\" (UniqueName: \"kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7\") pod \"auto-csr-approver-29566534-mfjdt\" (UID: \"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6\") " pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.386020 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjjd7\" (UniqueName: \"kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7\") pod \"auto-csr-approver-29566534-mfjdt\" (UID: \"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6\") " pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.407850 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjjd7\" (UniqueName: \"kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7\") pod \"auto-csr-approver-29566534-mfjdt\" (UID: \"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6\") " 
pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.511370 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:00 crc kubenswrapper[5136]: I0320 07:34:00.995361 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566534-mfjdt"] Mar 20 07:34:01 crc kubenswrapper[5136]: W0320 07:34:01.002228 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode5d24cb7_5c49_44f0_b18f_a09604ee8bb6.slice/crio-1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b WatchSource:0}: Error finding container 1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b: Status 404 returned error can't find the container with id 1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b Mar 20 07:34:01 crc kubenswrapper[5136]: I0320 07:34:01.397386 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:34:01 crc kubenswrapper[5136]: E0320 07:34:01.397800 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:34:01 crc kubenswrapper[5136]: I0320 07:34:01.459652 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" event={"ID":"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6","Type":"ContainerStarted","Data":"1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b"} Mar 20 07:34:02 crc 
kubenswrapper[5136]: I0320 07:34:02.468803 5136 generic.go:334] "Generic (PLEG): container finished" podID="e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" containerID="8642a4da6cd6ed0a2fb9cab568877c9f7b33cdf51b38940824b385ba7da2a860" exitCode=0 Mar 20 07:34:02 crc kubenswrapper[5136]: I0320 07:34:02.468938 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" event={"ID":"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6","Type":"ContainerDied","Data":"8642a4da6cd6ed0a2fb9cab568877c9f7b33cdf51b38940824b385ba7da2a860"} Mar 20 07:34:03 crc kubenswrapper[5136]: I0320 07:34:03.754518 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:03 crc kubenswrapper[5136]: I0320 07:34:03.832128 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjjd7\" (UniqueName: \"kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7\") pod \"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6\" (UID: \"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6\") " Mar 20 07:34:03 crc kubenswrapper[5136]: I0320 07:34:03.838310 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7" (OuterVolumeSpecName: "kube-api-access-hjjd7") pod "e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" (UID: "e5d24cb7-5c49-44f0-b18f-a09604ee8bb6"). InnerVolumeSpecName "kube-api-access-hjjd7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:34:03 crc kubenswrapper[5136]: I0320 07:34:03.934303 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjjd7\" (UniqueName: \"kubernetes.io/projected/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6-kube-api-access-hjjd7\") on node \"crc\" DevicePath \"\"" Mar 20 07:34:04 crc kubenswrapper[5136]: I0320 07:34:04.483887 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" event={"ID":"e5d24cb7-5c49-44f0-b18f-a09604ee8bb6","Type":"ContainerDied","Data":"1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b"} Mar 20 07:34:04 crc kubenswrapper[5136]: I0320 07:34:04.484335 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1543c2a104ee30c5cb7c48956667587e2cf79018de54be07b36cfc184b97be6b" Mar 20 07:34:04 crc kubenswrapper[5136]: I0320 07:34:04.484048 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566534-mfjdt" Mar 20 07:34:04 crc kubenswrapper[5136]: I0320 07:34:04.837531 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566528-p9gfh"] Mar 20 07:34:04 crc kubenswrapper[5136]: I0320 07:34:04.847648 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566528-p9gfh"] Mar 20 07:34:06 crc kubenswrapper[5136]: I0320 07:34:06.415709 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0663cd7c-704c-4495-8271-f55538649003" path="/var/lib/kubelet/pods/0663cd7c-704c-4495-8271-f55538649003/volumes" Mar 20 07:34:14 crc kubenswrapper[5136]: I0320 07:34:14.397440 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:34:14 crc kubenswrapper[5136]: E0320 07:34:14.400396 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:34:25 crc kubenswrapper[5136]: I0320 07:34:25.405737 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:34:25 crc kubenswrapper[5136]: E0320 07:34:25.406968 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:34:33 crc kubenswrapper[5136]: I0320 07:34:33.914313 5136 scope.go:117] "RemoveContainer" containerID="8e7514aba4ea3d84ec9496fc84994ced79208352205e65a08cbf2bd32660e7b5" Mar 20 07:34:37 crc kubenswrapper[5136]: I0320 07:34:37.397374 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:34:37 crc kubenswrapper[5136]: E0320 07:34:37.398208 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:34:51 crc kubenswrapper[5136]: I0320 07:34:51.397642 5136 scope.go:117] "RemoveContainer" 
containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:34:51 crc kubenswrapper[5136]: E0320 07:34:51.398669 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:35:06 crc kubenswrapper[5136]: I0320 07:35:06.396473 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:35:06 crc kubenswrapper[5136]: E0320 07:35:06.397407 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:35:21 crc kubenswrapper[5136]: I0320 07:35:21.396959 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:35:21 crc kubenswrapper[5136]: E0320 07:35:21.397586 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.347386 5136 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:31 crc kubenswrapper[5136]: E0320 07:35:31.347936 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" containerName="oc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.347971 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" containerName="oc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.348130 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" containerName="oc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.349281 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.364434 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.420788 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fqf2\" (UniqueName: \"kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.420941 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.420984 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.522218 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fqf2\" (UniqueName: \"kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.522354 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.522383 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.523341 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.523758 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.540548 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fqf2\" (UniqueName: \"kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2\") pod \"redhat-operators-28pvc\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:31 crc kubenswrapper[5136]: I0320 07:35:31.668485 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:32 crc kubenswrapper[5136]: I0320 07:35:32.081627 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:32 crc kubenswrapper[5136]: I0320 07:35:32.252229 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerStarted","Data":"e1aa85e32df54ec39eacb2972d512c55472988780e7fa2841ba88678618668a0"} Mar 20 07:35:33 crc kubenswrapper[5136]: I0320 07:35:33.260940 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerID="0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f" exitCode=0 Mar 20 07:35:33 crc kubenswrapper[5136]: I0320 07:35:33.261042 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerDied","Data":"0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f"} Mar 20 07:35:34 crc kubenswrapper[5136]: I0320 07:35:34.270365 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerStarted","Data":"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f"} Mar 20 07:35:35 crc kubenswrapper[5136]: I0320 07:35:35.278620 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerID="79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f" exitCode=0 Mar 20 07:35:35 crc kubenswrapper[5136]: I0320 07:35:35.278686 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerDied","Data":"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f"} Mar 20 07:35:35 crc kubenswrapper[5136]: I0320 07:35:35.396643 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:35:35 crc kubenswrapper[5136]: E0320 07:35:35.396940 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:35:36 crc kubenswrapper[5136]: I0320 07:35:36.290527 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerStarted","Data":"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9"} Mar 20 07:35:36 crc kubenswrapper[5136]: I0320 07:35:36.318584 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-28pvc" podStartSLOduration=2.8876636270000002 
podStartE2EDuration="5.318545863s" podCreationTimestamp="2026-03-20 07:35:31 +0000 UTC" firstStartedPulling="2026-03-20 07:35:33.263774358 +0000 UTC m=+2765.523085519" lastFinishedPulling="2026-03-20 07:35:35.694656604 +0000 UTC m=+2767.953967755" observedRunningTime="2026-03-20 07:35:36.313605126 +0000 UTC m=+2768.572916267" watchObservedRunningTime="2026-03-20 07:35:36.318545863 +0000 UTC m=+2768.577857044" Mar 20 07:35:41 crc kubenswrapper[5136]: I0320 07:35:41.681643 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:41 crc kubenswrapper[5136]: I0320 07:35:41.682141 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:42 crc kubenswrapper[5136]: I0320 07:35:42.732800 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-28pvc" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="registry-server" probeResult="failure" output=< Mar 20 07:35:42 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 07:35:42 crc kubenswrapper[5136]: > Mar 20 07:35:46 crc kubenswrapper[5136]: I0320 07:35:46.397269 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:35:46 crc kubenswrapper[5136]: E0320 07:35:46.398027 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:35:51 crc kubenswrapper[5136]: I0320 07:35:51.741766 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:51 crc kubenswrapper[5136]: I0320 07:35:51.827926 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:51 crc kubenswrapper[5136]: I0320 07:35:51.993913 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.456476 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-28pvc" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="registry-server" containerID="cri-o://860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9" gracePeriod=2 Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.809312 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.993327 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities\") pod \"c7c927c5-116e-433d-b782-51792c8a0ae3\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.993389 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fqf2\" (UniqueName: \"kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2\") pod \"c7c927c5-116e-433d-b782-51792c8a0ae3\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.993476 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content\") pod 
\"c7c927c5-116e-433d-b782-51792c8a0ae3\" (UID: \"c7c927c5-116e-433d-b782-51792c8a0ae3\") " Mar 20 07:35:53 crc kubenswrapper[5136]: I0320 07:35:53.994729 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities" (OuterVolumeSpecName: "utilities") pod "c7c927c5-116e-433d-b782-51792c8a0ae3" (UID: "c7c927c5-116e-433d-b782-51792c8a0ae3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.002625 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2" (OuterVolumeSpecName: "kube-api-access-5fqf2") pod "c7c927c5-116e-433d-b782-51792c8a0ae3" (UID: "c7c927c5-116e-433d-b782-51792c8a0ae3"). InnerVolumeSpecName "kube-api-access-5fqf2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.094907 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fqf2\" (UniqueName: \"kubernetes.io/projected/c7c927c5-116e-433d-b782-51792c8a0ae3-kube-api-access-5fqf2\") on node \"crc\" DevicePath \"\"" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.094950 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.161950 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7c927c5-116e-433d-b782-51792c8a0ae3" (UID: "c7c927c5-116e-433d-b782-51792c8a0ae3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.196749 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7c927c5-116e-433d-b782-51792c8a0ae3-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.467312 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerID="860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9" exitCode=0 Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.467405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerDied","Data":"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9"} Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.467661 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28pvc" event={"ID":"c7c927c5-116e-433d-b782-51792c8a0ae3","Type":"ContainerDied","Data":"e1aa85e32df54ec39eacb2972d512c55472988780e7fa2841ba88678618668a0"} Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.467443 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-28pvc" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.467683 5136 scope.go:117] "RemoveContainer" containerID="860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.492333 5136 scope.go:117] "RemoveContainer" containerID="79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.498155 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.502197 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-28pvc"] Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.520271 5136 scope.go:117] "RemoveContainer" containerID="0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.554410 5136 scope.go:117] "RemoveContainer" containerID="860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9" Mar 20 07:35:54 crc kubenswrapper[5136]: E0320 07:35:54.554741 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9\": container with ID starting with 860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9 not found: ID does not exist" containerID="860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.554772 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9"} err="failed to get container status \"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9\": rpc error: code = NotFound desc = could not find container 
\"860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9\": container with ID starting with 860f4ad19808a6ee3b25a21d2eb4bd1657ceeab22b48d60c428e37e61775bab9 not found: ID does not exist" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.554791 5136 scope.go:117] "RemoveContainer" containerID="79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f" Mar 20 07:35:54 crc kubenswrapper[5136]: E0320 07:35:54.555421 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f\": container with ID starting with 79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f not found: ID does not exist" containerID="79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.555450 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f"} err="failed to get container status \"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f\": rpc error: code = NotFound desc = could not find container \"79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f\": container with ID starting with 79ec067cd70507e41bdecf4d97e168873173fc5703135e56d033213e96c9483f not found: ID does not exist" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.555464 5136 scope.go:117] "RemoveContainer" containerID="0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f" Mar 20 07:35:54 crc kubenswrapper[5136]: E0320 07:35:54.555709 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f\": container with ID starting with 0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f not found: ID does not exist" 
containerID="0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f" Mar 20 07:35:54 crc kubenswrapper[5136]: I0320 07:35:54.555730 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f"} err="failed to get container status \"0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f\": rpc error: code = NotFound desc = could not find container \"0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f\": container with ID starting with 0492f5627451c1e87797cbec14ab8d381cc460988438d9529b281fa70312b00f not found: ID does not exist" Mar 20 07:35:56 crc kubenswrapper[5136]: I0320 07:35:56.417731 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" path="/var/lib/kubelet/pods/c7c927c5-116e-433d-b782-51792c8a0ae3/volumes" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.144235 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566536-gdck4"] Mar 20 07:36:00 crc kubenswrapper[5136]: E0320 07:36:00.144836 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="extract-utilities" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.144849 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="extract-utilities" Mar 20 07:36:00 crc kubenswrapper[5136]: E0320 07:36:00.144874 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="registry-server" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.144880 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="registry-server" Mar 20 07:36:00 crc kubenswrapper[5136]: E0320 07:36:00.144890 5136 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="extract-content" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.144896 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="extract-content" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.145010 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7c927c5-116e-433d-b782-51792c8a0ae3" containerName="registry-server" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.145460 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566536-gdck4" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.149857 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.150568 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.150742 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.158741 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566536-gdck4"] Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.188623 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbw45\" (UniqueName: \"kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45\") pod \"auto-csr-approver-29566536-gdck4\" (UID: \"52f90699-ed0d-4f94-ac8f-0710a3df7d1d\") " pod="openshift-infra/auto-csr-approver-29566536-gdck4" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.289298 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbw45\" 
(UniqueName: \"kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45\") pod \"auto-csr-approver-29566536-gdck4\" (UID: \"52f90699-ed0d-4f94-ac8f-0710a3df7d1d\") " pod="openshift-infra/auto-csr-approver-29566536-gdck4" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.310248 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbw45\" (UniqueName: \"kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45\") pod \"auto-csr-approver-29566536-gdck4\" (UID: \"52f90699-ed0d-4f94-ac8f-0710a3df7d1d\") " pod="openshift-infra/auto-csr-approver-29566536-gdck4" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.477216 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566536-gdck4" Mar 20 07:36:00 crc kubenswrapper[5136]: I0320 07:36:00.733376 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566536-gdck4"] Mar 20 07:36:01 crc kubenswrapper[5136]: I0320 07:36:01.396941 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:36:01 crc kubenswrapper[5136]: E0320 07:36:01.397334 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:36:01 crc kubenswrapper[5136]: I0320 07:36:01.530878 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566536-gdck4" 
event={"ID":"52f90699-ed0d-4f94-ac8f-0710a3df7d1d","Type":"ContainerStarted","Data":"90713c50ab758e3ee9df428264a1212b05faa55593554495076cc4b56e2cc6f7"} Mar 20 07:36:02 crc kubenswrapper[5136]: I0320 07:36:02.541136 5136 generic.go:334] "Generic (PLEG): container finished" podID="52f90699-ed0d-4f94-ac8f-0710a3df7d1d" containerID="35330414dc071593a3b58b60db6f11d64d8df670930fd1d800151dee73ac1250" exitCode=0 Mar 20 07:36:02 crc kubenswrapper[5136]: I0320 07:36:02.541200 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566536-gdck4" event={"ID":"52f90699-ed0d-4f94-ac8f-0710a3df7d1d","Type":"ContainerDied","Data":"35330414dc071593a3b58b60db6f11d64d8df670930fd1d800151dee73ac1250"} Mar 20 07:36:03 crc kubenswrapper[5136]: I0320 07:36:03.824609 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566536-gdck4" Mar 20 07:36:03 crc kubenswrapper[5136]: I0320 07:36:03.842738 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbw45\" (UniqueName: \"kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45\") pod \"52f90699-ed0d-4f94-ac8f-0710a3df7d1d\" (UID: \"52f90699-ed0d-4f94-ac8f-0710a3df7d1d\") " Mar 20 07:36:03 crc kubenswrapper[5136]: I0320 07:36:03.849006 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45" (OuterVolumeSpecName: "kube-api-access-fbw45") pod "52f90699-ed0d-4f94-ac8f-0710a3df7d1d" (UID: "52f90699-ed0d-4f94-ac8f-0710a3df7d1d"). InnerVolumeSpecName "kube-api-access-fbw45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:36:03 crc kubenswrapper[5136]: I0320 07:36:03.944611 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbw45\" (UniqueName: \"kubernetes.io/projected/52f90699-ed0d-4f94-ac8f-0710a3df7d1d-kube-api-access-fbw45\") on node \"crc\" DevicePath \"\"" Mar 20 07:36:04 crc kubenswrapper[5136]: I0320 07:36:04.563942 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566536-gdck4" event={"ID":"52f90699-ed0d-4f94-ac8f-0710a3df7d1d","Type":"ContainerDied","Data":"90713c50ab758e3ee9df428264a1212b05faa55593554495076cc4b56e2cc6f7"} Mar 20 07:36:04 crc kubenswrapper[5136]: I0320 07:36:04.563995 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90713c50ab758e3ee9df428264a1212b05faa55593554495076cc4b56e2cc6f7" Mar 20 07:36:04 crc kubenswrapper[5136]: I0320 07:36:04.564071 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566536-gdck4" Mar 20 07:36:04 crc kubenswrapper[5136]: I0320 07:36:04.925694 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566530-wht58"] Mar 20 07:36:04 crc kubenswrapper[5136]: I0320 07:36:04.934140 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566530-wht58"] Mar 20 07:36:06 crc kubenswrapper[5136]: I0320 07:36:06.409603 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d0faa53-8471-40c0-a2ed-ef66d5b66e72" path="/var/lib/kubelet/pods/2d0faa53-8471-40c0-a2ed-ef66d5b66e72/volumes" Mar 20 07:36:13 crc kubenswrapper[5136]: I0320 07:36:13.396971 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:36:13 crc kubenswrapper[5136]: E0320 07:36:13.398647 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:36:26 crc kubenswrapper[5136]: I0320 07:36:26.396445 5136 scope.go:117] "RemoveContainer" containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:36:26 crc kubenswrapper[5136]: I0320 07:36:26.756642 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322"} Mar 20 07:36:34 crc kubenswrapper[5136]: I0320 07:36:34.051516 5136 scope.go:117] "RemoveContainer" containerID="a0171e6422989bbfb70e3a76ced8595e932c612b43527acfa116a80d4b912265" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.149518 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566538-d9gdj"] Mar 20 07:38:00 crc kubenswrapper[5136]: E0320 07:38:00.150532 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f90699-ed0d-4f94-ac8f-0710a3df7d1d" containerName="oc" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.150552 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f90699-ed0d-4f94-ac8f-0710a3df7d1d" containerName="oc" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.150796 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f90699-ed0d-4f94-ac8f-0710a3df7d1d" containerName="oc" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.151440 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566538-d9gdj" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.156020 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.156237 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.156377 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.172665 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566538-d9gdj"] Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.225680 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgm92\" (UniqueName: \"kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92\") pod \"auto-csr-approver-29566538-d9gdj\" (UID: \"92f22f97-0d01-4c04-8d7c-8f0ec81c1559\") " pod="openshift-infra/auto-csr-approver-29566538-d9gdj" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.326592 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgm92\" (UniqueName: \"kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92\") pod \"auto-csr-approver-29566538-d9gdj\" (UID: \"92f22f97-0d01-4c04-8d7c-8f0ec81c1559\") " pod="openshift-infra/auto-csr-approver-29566538-d9gdj" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.344730 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgm92\" (UniqueName: \"kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92\") pod \"auto-csr-approver-29566538-d9gdj\" (UID: \"92f22f97-0d01-4c04-8d7c-8f0ec81c1559\") " 
pod="openshift-infra/auto-csr-approver-29566538-d9gdj" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.474919 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566538-d9gdj" Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.967553 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:38:00 crc kubenswrapper[5136]: I0320 07:38:00.970083 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566538-d9gdj"] Mar 20 07:38:01 crc kubenswrapper[5136]: I0320 07:38:01.566344 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566538-d9gdj" event={"ID":"92f22f97-0d01-4c04-8d7c-8f0ec81c1559","Type":"ContainerStarted","Data":"75f814047295dea1a3e10b2efbf1c95fc9c334163f82f2d34bf9c79441a48819"} Mar 20 07:38:02 crc kubenswrapper[5136]: I0320 07:38:02.576022 5136 generic.go:334] "Generic (PLEG): container finished" podID="92f22f97-0d01-4c04-8d7c-8f0ec81c1559" containerID="085754035e9717018bfa13a75fa5176cf36ff83294b4916d7b6b6031d31b5c22" exitCode=0 Mar 20 07:38:02 crc kubenswrapper[5136]: I0320 07:38:02.576143 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566538-d9gdj" event={"ID":"92f22f97-0d01-4c04-8d7c-8f0ec81c1559","Type":"ContainerDied","Data":"085754035e9717018bfa13a75fa5176cf36ff83294b4916d7b6b6031d31b5c22"} Mar 20 07:38:03 crc kubenswrapper[5136]: I0320 07:38:03.885047 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566538-d9gdj" Mar 20 07:38:03 crc kubenswrapper[5136]: I0320 07:38:03.979566 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgm92\" (UniqueName: \"kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92\") pod \"92f22f97-0d01-4c04-8d7c-8f0ec81c1559\" (UID: \"92f22f97-0d01-4c04-8d7c-8f0ec81c1559\") " Mar 20 07:38:03 crc kubenswrapper[5136]: I0320 07:38:03.990575 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92" (OuterVolumeSpecName: "kube-api-access-rgm92") pod "92f22f97-0d01-4c04-8d7c-8f0ec81c1559" (UID: "92f22f97-0d01-4c04-8d7c-8f0ec81c1559"). InnerVolumeSpecName "kube-api-access-rgm92". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.080799 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgm92\" (UniqueName: \"kubernetes.io/projected/92f22f97-0d01-4c04-8d7c-8f0ec81c1559-kube-api-access-rgm92\") on node \"crc\" DevicePath \"\"" Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.594993 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566538-d9gdj" event={"ID":"92f22f97-0d01-4c04-8d7c-8f0ec81c1559","Type":"ContainerDied","Data":"75f814047295dea1a3e10b2efbf1c95fc9c334163f82f2d34bf9c79441a48819"} Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.595596 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f814047295dea1a3e10b2efbf1c95fc9c334163f82f2d34bf9c79441a48819" Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.595109 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566538-d9gdj" Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.991806 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566532-twtd9"] Mar 20 07:38:04 crc kubenswrapper[5136]: I0320 07:38:04.998904 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566532-twtd9"] Mar 20 07:38:06 crc kubenswrapper[5136]: I0320 07:38:06.409751 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4c27187-55d8-4db4-9cae-d77617300a14" path="/var/lib/kubelet/pods/e4c27187-55d8-4db4-9cae-d77617300a14/volumes" Mar 20 07:38:30 crc kubenswrapper[5136]: I0320 07:38:30.963110 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"] Mar 20 07:38:30 crc kubenswrapper[5136]: E0320 07:38:30.964296 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92f22f97-0d01-4c04-8d7c-8f0ec81c1559" containerName="oc" Mar 20 07:38:30 crc kubenswrapper[5136]: I0320 07:38:30.964324 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="92f22f97-0d01-4c04-8d7c-8f0ec81c1559" containerName="oc" Mar 20 07:38:30 crc kubenswrapper[5136]: I0320 07:38:30.964584 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="92f22f97-0d01-4c04-8d7c-8f0ec81c1559" containerName="oc" Mar 20 07:38:30 crc kubenswrapper[5136]: I0320 07:38:30.966516 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:30 crc kubenswrapper[5136]: I0320 07:38:30.971778 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"] Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.025928 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.025984 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7g92\" (UniqueName: \"kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.026059 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.127572 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.127678 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.127698 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7g92\" (UniqueName: \"kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.128514 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.128535 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.152918 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7g92\" (UniqueName: \"kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92\") pod \"redhat-marketplace-rnbzz\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.299892 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.786196 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"] Mar 20 07:38:31 crc kubenswrapper[5136]: I0320 07:38:31.843433 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerStarted","Data":"695de2aa0a9f443217d9e1516f6c50bddda8614b8b750d85a0fcb48982677f15"} Mar 20 07:38:32 crc kubenswrapper[5136]: I0320 07:38:32.854864 5136 generic.go:334] "Generic (PLEG): container finished" podID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerID="8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9" exitCode=0 Mar 20 07:38:32 crc kubenswrapper[5136]: I0320 07:38:32.854949 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerDied","Data":"8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9"} Mar 20 07:38:33 crc kubenswrapper[5136]: I0320 07:38:33.868527 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerStarted","Data":"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5"} Mar 20 07:38:34 crc kubenswrapper[5136]: I0320 07:38:34.152960 5136 scope.go:117] "RemoveContainer" containerID="5c9dbddef3617b2a1a9f29b6615bed2e74b730bf03006a402fac0528653fa989" Mar 20 07:38:34 crc kubenswrapper[5136]: I0320 07:38:34.886745 5136 generic.go:334] "Generic (PLEG): container finished" podID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerID="5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5" exitCode=0 Mar 20 07:38:34 crc kubenswrapper[5136]: I0320 07:38:34.886871 5136 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerDied","Data":"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5"} Mar 20 07:38:35 crc kubenswrapper[5136]: I0320 07:38:35.898643 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerStarted","Data":"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052"} Mar 20 07:38:35 crc kubenswrapper[5136]: I0320 07:38:35.942937 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rnbzz" podStartSLOduration=3.157809826 podStartE2EDuration="5.942913283s" podCreationTimestamp="2026-03-20 07:38:30 +0000 UTC" firstStartedPulling="2026-03-20 07:38:32.856389079 +0000 UTC m=+2945.115700240" lastFinishedPulling="2026-03-20 07:38:35.641492516 +0000 UTC m=+2947.900803697" observedRunningTime="2026-03-20 07:38:35.930483795 +0000 UTC m=+2948.189794986" watchObservedRunningTime="2026-03-20 07:38:35.942913283 +0000 UTC m=+2948.202224474" Mar 20 07:38:41 crc kubenswrapper[5136]: I0320 07:38:41.301205 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:41 crc kubenswrapper[5136]: I0320 07:38:41.302052 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:41 crc kubenswrapper[5136]: I0320 07:38:41.344403 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:42 crc kubenswrapper[5136]: I0320 07:38:42.030555 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:42 crc kubenswrapper[5136]: I0320 
07:38:42.093270 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"] Mar 20 07:38:43 crc kubenswrapper[5136]: I0320 07:38:43.994706 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rnbzz" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="registry-server" containerID="cri-o://4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052" gracePeriod=2 Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.528349 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.624015 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities\") pod \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.624107 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content\") pod \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.624168 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7g92\" (UniqueName: \"kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92\") pod \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\" (UID: \"5e0cc814-f9aa-4a89-ba33-a9729f19d76e\") " Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.626075 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities" (OuterVolumeSpecName: 
"utilities") pod "5e0cc814-f9aa-4a89-ba33-a9729f19d76e" (UID: "5e0cc814-f9aa-4a89-ba33-a9729f19d76e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.631005 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92" (OuterVolumeSpecName: "kube-api-access-d7g92") pod "5e0cc814-f9aa-4a89-ba33-a9729f19d76e" (UID: "5e0cc814-f9aa-4a89-ba33-a9729f19d76e"). InnerVolumeSpecName "kube-api-access-d7g92". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.661680 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5e0cc814-f9aa-4a89-ba33-a9729f19d76e" (UID: "5e0cc814-f9aa-4a89-ba33-a9729f19d76e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.725110 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.725149 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:38:44 crc kubenswrapper[5136]: I0320 07:38:44.725159 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7g92\" (UniqueName: \"kubernetes.io/projected/5e0cc814-f9aa-4a89-ba33-a9729f19d76e-kube-api-access-d7g92\") on node \"crc\" DevicePath \"\"" Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.020427 5136 generic.go:334] "Generic (PLEG): container finished" podID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerID="4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052" exitCode=0 Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.020563 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rnbzz" Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.020547 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerDied","Data":"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052"} Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.021347 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rnbzz" event={"ID":"5e0cc814-f9aa-4a89-ba33-a9729f19d76e","Type":"ContainerDied","Data":"695de2aa0a9f443217d9e1516f6c50bddda8614b8b750d85a0fcb48982677f15"} Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.021412 5136 scope.go:117] "RemoveContainer" containerID="4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052" Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.077907 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"] Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.083946 5136 scope.go:117] "RemoveContainer" containerID="5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5" Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.091760 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rnbzz"] Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.121081 5136 scope.go:117] "RemoveContainer" containerID="8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9" Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.150291 5136 scope.go:117] "RemoveContainer" containerID="4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052" Mar 20 07:38:45 crc kubenswrapper[5136]: E0320 07:38:45.151002 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052\": container with ID starting with 4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052 not found: ID does not exist" containerID="4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052" Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.151041 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052"} err="failed to get container status \"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052\": rpc error: code = NotFound desc = could not find container \"4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052\": container with ID starting with 4f20543aa346d6e1e4506cd7b71cc9bab0c7b32446c771a6fc48f5f4b4445052 not found: ID does not exist" Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.151084 5136 scope.go:117] "RemoveContainer" containerID="5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5" Mar 20 07:38:45 crc kubenswrapper[5136]: E0320 07:38:45.151562 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5\": container with ID starting with 5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5 not found: ID does not exist" containerID="5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5" Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.151604 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5"} err="failed to get container status \"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5\": rpc error: code = NotFound desc = could not find container \"5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5\": container with ID 
starting with 5bc1f0e48442347b157d4a68df6a9bbf2c3efc531d8802f64220c6b8565cfae5 not found: ID does not exist" Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.151630 5136 scope.go:117] "RemoveContainer" containerID="8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9" Mar 20 07:38:45 crc kubenswrapper[5136]: E0320 07:38:45.152087 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9\": container with ID starting with 8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9 not found: ID does not exist" containerID="8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9" Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.152136 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9"} err="failed to get container status \"8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9\": rpc error: code = NotFound desc = could not find container \"8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9\": container with ID starting with 8cd7fccca82be14dd1ed6de2bea6154c82344b596e72f18152f074e61a57cec9 not found: ID does not exist" Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.822486 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:38:45 crc kubenswrapper[5136]: I0320 07:38:45.822551 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:38:46 crc kubenswrapper[5136]: I0320 07:38:46.407136 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" path="/var/lib/kubelet/pods/5e0cc814-f9aa-4a89-ba33-a9729f19d76e/volumes" Mar 20 07:39:15 crc kubenswrapper[5136]: I0320 07:39:15.822126 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:39:15 crc kubenswrapper[5136]: I0320 07:39:15.822731 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:39:45 crc kubenswrapper[5136]: I0320 07:39:45.822107 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:39:45 crc kubenswrapper[5136]: I0320 07:39:45.822756 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:39:45 crc kubenswrapper[5136]: I0320 07:39:45.822848 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:39:45 crc kubenswrapper[5136]: I0320 07:39:45.823575 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:39:45 crc kubenswrapper[5136]: I0320 07:39:45.823667 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322" gracePeriod=600 Mar 20 07:39:46 crc kubenswrapper[5136]: I0320 07:39:46.515794 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322" exitCode=0 Mar 20 07:39:46 crc kubenswrapper[5136]: I0320 07:39:46.515836 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322"} Mar 20 07:39:46 crc kubenswrapper[5136]: I0320 07:39:46.516158 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17"} Mar 20 07:39:46 crc kubenswrapper[5136]: I0320 07:39:46.516180 5136 scope.go:117] "RemoveContainer" 
containerID="8bdd25e3705fae9d20126ff1a257b5cfa61e14bc3ca3e9c3e86c4ac9a9ba5365" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.152144 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566540-q2mqq"] Mar 20 07:40:00 crc kubenswrapper[5136]: E0320 07:40:00.153228 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="extract-content" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.153243 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="extract-content" Mar 20 07:40:00 crc kubenswrapper[5136]: E0320 07:40:00.153256 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="registry-server" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.153262 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="registry-server" Mar 20 07:40:00 crc kubenswrapper[5136]: E0320 07:40:00.153269 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="extract-utilities" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.153276 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="extract-utilities" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.153442 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e0cc814-f9aa-4a89-ba33-a9729f19d76e" containerName="registry-server" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.153860 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.159843 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.159985 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.160059 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.167592 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566540-q2mqq"] Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.334207 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tflrp\" (UniqueName: \"kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp\") pod \"auto-csr-approver-29566540-q2mqq\" (UID: \"13ab4686-525c-4931-93d9-5b71ec6644ee\") " pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.436509 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tflrp\" (UniqueName: \"kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp\") pod \"auto-csr-approver-29566540-q2mqq\" (UID: \"13ab4686-525c-4931-93d9-5b71ec6644ee\") " pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.459916 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tflrp\" (UniqueName: \"kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp\") pod \"auto-csr-approver-29566540-q2mqq\" (UID: \"13ab4686-525c-4931-93d9-5b71ec6644ee\") " 
pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.490924 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:00 crc kubenswrapper[5136]: I0320 07:40:00.932410 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566540-q2mqq"] Mar 20 07:40:01 crc kubenswrapper[5136]: I0320 07:40:01.637387 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" event={"ID":"13ab4686-525c-4931-93d9-5b71ec6644ee","Type":"ContainerStarted","Data":"f9695eeb9c813c7c883a386e4570bf2ffddc4a88a95f2ae1f421545adb211333"} Mar 20 07:40:02 crc kubenswrapper[5136]: I0320 07:40:02.648168 5136 generic.go:334] "Generic (PLEG): container finished" podID="13ab4686-525c-4931-93d9-5b71ec6644ee" containerID="252260f3a58979042bf8b21321cd53a2147f00019a6012d4dfcab45147ceb6a9" exitCode=0 Mar 20 07:40:02 crc kubenswrapper[5136]: I0320 07:40:02.648268 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" event={"ID":"13ab4686-525c-4931-93d9-5b71ec6644ee","Type":"ContainerDied","Data":"252260f3a58979042bf8b21321cd53a2147f00019a6012d4dfcab45147ceb6a9"} Mar 20 07:40:03 crc kubenswrapper[5136]: I0320 07:40:03.900140 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:03 crc kubenswrapper[5136]: I0320 07:40:03.990466 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tflrp\" (UniqueName: \"kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp\") pod \"13ab4686-525c-4931-93d9-5b71ec6644ee\" (UID: \"13ab4686-525c-4931-93d9-5b71ec6644ee\") " Mar 20 07:40:03 crc kubenswrapper[5136]: I0320 07:40:03.999166 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp" (OuterVolumeSpecName: "kube-api-access-tflrp") pod "13ab4686-525c-4931-93d9-5b71ec6644ee" (UID: "13ab4686-525c-4931-93d9-5b71ec6644ee"). InnerVolumeSpecName "kube-api-access-tflrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.091758 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tflrp\" (UniqueName: \"kubernetes.io/projected/13ab4686-525c-4931-93d9-5b71ec6644ee-kube-api-access-tflrp\") on node \"crc\" DevicePath \"\"" Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.663178 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" event={"ID":"13ab4686-525c-4931-93d9-5b71ec6644ee","Type":"ContainerDied","Data":"f9695eeb9c813c7c883a386e4570bf2ffddc4a88a95f2ae1f421545adb211333"} Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.663445 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9695eeb9c813c7c883a386e4570bf2ffddc4a88a95f2ae1f421545adb211333" Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.663238 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566540-q2mqq" Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.960340 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566534-mfjdt"] Mar 20 07:40:04 crc kubenswrapper[5136]: I0320 07:40:04.965488 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566534-mfjdt"] Mar 20 07:40:06 crc kubenswrapper[5136]: I0320 07:40:06.412664 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5d24cb7-5c49-44f0-b18f-a09604ee8bb6" path="/var/lib/kubelet/pods/e5d24cb7-5c49-44f0-b18f-a09604ee8bb6/volumes" Mar 20 07:40:34 crc kubenswrapper[5136]: I0320 07:40:34.282149 5136 scope.go:117] "RemoveContainer" containerID="8642a4da6cd6ed0a2fb9cab568877c9f7b33cdf51b38940824b385ba7da2a860" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.157476 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566542-pds9m"] Mar 20 07:42:00 crc kubenswrapper[5136]: E0320 07:42:00.158524 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13ab4686-525c-4931-93d9-5b71ec6644ee" containerName="oc" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.158540 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="13ab4686-525c-4931-93d9-5b71ec6644ee" containerName="oc" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.158727 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="13ab4686-525c-4931-93d9-5b71ec6644ee" containerName="oc" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.159235 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.163535 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.165384 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.166110 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.172693 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566542-pds9m"] Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.227401 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcxz4\" (UniqueName: \"kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4\") pod \"auto-csr-approver-29566542-pds9m\" (UID: \"17d864d8-8238-4e66-b9ac-d03d95596254\") " pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.328893 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcxz4\" (UniqueName: \"kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4\") pod \"auto-csr-approver-29566542-pds9m\" (UID: \"17d864d8-8238-4e66-b9ac-d03d95596254\") " pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.352690 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcxz4\" (UniqueName: \"kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4\") pod \"auto-csr-approver-29566542-pds9m\" (UID: \"17d864d8-8238-4e66-b9ac-d03d95596254\") " 
pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.487562 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:00 crc kubenswrapper[5136]: I0320 07:42:00.881891 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566542-pds9m"] Mar 20 07:42:01 crc kubenswrapper[5136]: I0320 07:42:01.403132 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566542-pds9m" event={"ID":"17d864d8-8238-4e66-b9ac-d03d95596254","Type":"ContainerStarted","Data":"d4dd928fd390e38d6bae8bb7b8e974d5de68a461cdcd885e35070871304c16e4"} Mar 20 07:42:02 crc kubenswrapper[5136]: I0320 07:42:02.408862 5136 generic.go:334] "Generic (PLEG): container finished" podID="17d864d8-8238-4e66-b9ac-d03d95596254" containerID="7d44df1c73e9c1d9108526abbe2353b5337e03d920bac4de2652a37d15133fc6" exitCode=0 Mar 20 07:42:02 crc kubenswrapper[5136]: I0320 07:42:02.408914 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566542-pds9m" event={"ID":"17d864d8-8238-4e66-b9ac-d03d95596254","Type":"ContainerDied","Data":"7d44df1c73e9c1d9108526abbe2353b5337e03d920bac4de2652a37d15133fc6"} Mar 20 07:42:03 crc kubenswrapper[5136]: I0320 07:42:03.751177 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:03 crc kubenswrapper[5136]: I0320 07:42:03.892669 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcxz4\" (UniqueName: \"kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4\") pod \"17d864d8-8238-4e66-b9ac-d03d95596254\" (UID: \"17d864d8-8238-4e66-b9ac-d03d95596254\") " Mar 20 07:42:03 crc kubenswrapper[5136]: I0320 07:42:03.898701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4" (OuterVolumeSpecName: "kube-api-access-gcxz4") pod "17d864d8-8238-4e66-b9ac-d03d95596254" (UID: "17d864d8-8238-4e66-b9ac-d03d95596254"). InnerVolumeSpecName "kube-api-access-gcxz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:42:03 crc kubenswrapper[5136]: I0320 07:42:03.994400 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcxz4\" (UniqueName: \"kubernetes.io/projected/17d864d8-8238-4e66-b9ac-d03d95596254-kube-api-access-gcxz4\") on node \"crc\" DevicePath \"\"" Mar 20 07:42:04 crc kubenswrapper[5136]: I0320 07:42:04.432299 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566542-pds9m" event={"ID":"17d864d8-8238-4e66-b9ac-d03d95596254","Type":"ContainerDied","Data":"d4dd928fd390e38d6bae8bb7b8e974d5de68a461cdcd885e35070871304c16e4"} Mar 20 07:42:04 crc kubenswrapper[5136]: I0320 07:42:04.432896 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4dd928fd390e38d6bae8bb7b8e974d5de68a461cdcd885e35070871304c16e4" Mar 20 07:42:04 crc kubenswrapper[5136]: I0320 07:42:04.432996 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566542-pds9m" Mar 20 07:42:04 crc kubenswrapper[5136]: I0320 07:42:04.872218 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566536-gdck4"] Mar 20 07:42:04 crc kubenswrapper[5136]: I0320 07:42:04.884559 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566536-gdck4"] Mar 20 07:42:06 crc kubenswrapper[5136]: I0320 07:42:06.411135 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f90699-ed0d-4f94-ac8f-0710a3df7d1d" path="/var/lib/kubelet/pods/52f90699-ed0d-4f94-ac8f-0710a3df7d1d/volumes" Mar 20 07:42:15 crc kubenswrapper[5136]: I0320 07:42:15.822619 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:42:15 crc kubenswrapper[5136]: I0320 07:42:15.823305 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:42:34 crc kubenswrapper[5136]: I0320 07:42:34.379204 5136 scope.go:117] "RemoveContainer" containerID="35330414dc071593a3b58b60db6f11d64d8df670930fd1d800151dee73ac1250" Mar 20 07:42:45 crc kubenswrapper[5136]: I0320 07:42:45.822128 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:42:45 crc kubenswrapper[5136]: 
I0320 07:42:45.822708 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.734626 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:03 crc kubenswrapper[5136]: E0320 07:43:03.735294 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17d864d8-8238-4e66-b9ac-d03d95596254" containerName="oc" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.735307 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="17d864d8-8238-4e66-b9ac-d03d95596254" containerName="oc" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.735445 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="17d864d8-8238-4e66-b9ac-d03d95596254" containerName="oc" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.736326 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.747153 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.857289 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.857462 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.857521 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfwgv\" (UniqueName: \"kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.959223 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfwgv\" (UniqueName: \"kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.959278 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.959354 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.959920 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.960093 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:03 crc kubenswrapper[5136]: I0320 07:43:03.988305 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfwgv\" (UniqueName: \"kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv\") pod \"community-operators-cn4tr\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.066529 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.608214 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.965657 5136 generic.go:334] "Generic (PLEG): container finished" podID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerID="2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786" exitCode=0 Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.965750 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerDied","Data":"2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786"} Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.966018 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerStarted","Data":"a803bd4eb38a46416dcf94445aab13c2f8df6b784f82441df8622ec1b09e2a61"} Mar 20 07:43:04 crc kubenswrapper[5136]: I0320 07:43:04.967906 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:43:05 crc kubenswrapper[5136]: I0320 07:43:05.975802 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerStarted","Data":"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa"} Mar 20 07:43:06 crc kubenswrapper[5136]: I0320 07:43:06.990274 5136 generic.go:334] "Generic (PLEG): container finished" podID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerID="0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa" exitCode=0 Mar 20 07:43:06 crc kubenswrapper[5136]: I0320 07:43:06.990352 5136 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerDied","Data":"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa"} Mar 20 07:43:08 crc kubenswrapper[5136]: I0320 07:43:08.000322 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerStarted","Data":"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a"} Mar 20 07:43:08 crc kubenswrapper[5136]: I0320 07:43:08.025197 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cn4tr" podStartSLOduration=2.322474844 podStartE2EDuration="5.025172527s" podCreationTimestamp="2026-03-20 07:43:03 +0000 UTC" firstStartedPulling="2026-03-20 07:43:04.967607151 +0000 UTC m=+3217.226918302" lastFinishedPulling="2026-03-20 07:43:07.670304834 +0000 UTC m=+3219.929615985" observedRunningTime="2026-03-20 07:43:08.017358136 +0000 UTC m=+3220.276669327" watchObservedRunningTime="2026-03-20 07:43:08.025172527 +0000 UTC m=+3220.284483688" Mar 20 07:43:14 crc kubenswrapper[5136]: I0320 07:43:14.068165 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:14 crc kubenswrapper[5136]: I0320 07:43:14.068929 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:14 crc kubenswrapper[5136]: I0320 07:43:14.120171 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.130312 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 
07:43:15.175280 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.822119 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.822206 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.822277 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.823272 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:43:15 crc kubenswrapper[5136]: I0320 07:43:15.823398 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" gracePeriod=600 Mar 20 07:43:15 crc kubenswrapper[5136]: E0320 07:43:15.946014 5136 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:43:16 crc kubenswrapper[5136]: I0320 07:43:16.059858 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" exitCode=0 Mar 20 07:43:16 crc kubenswrapper[5136]: I0320 07:43:16.059905 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17"} Mar 20 07:43:16 crc kubenswrapper[5136]: I0320 07:43:16.059990 5136 scope.go:117] "RemoveContainer" containerID="e931a73800d6f8bfc60e4afb965ee2c2a8a596df85cf1a7cd9146234535ad322" Mar 20 07:43:16 crc kubenswrapper[5136]: I0320 07:43:16.060844 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:43:16 crc kubenswrapper[5136]: E0320 07:43:16.061219 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.071512 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-cn4tr" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="registry-server" containerID="cri-o://82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a" gracePeriod=2 Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.454510 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.586780 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfwgv\" (UniqueName: \"kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv\") pod \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.586876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content\") pod \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.586918 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities\") pod \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\" (UID: \"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff\") " Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.588067 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities" (OuterVolumeSpecName: "utilities") pod "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" (UID: "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.591338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv" (OuterVolumeSpecName: "kube-api-access-gfwgv") pod "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" (UID: "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff"). InnerVolumeSpecName "kube-api-access-gfwgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.637469 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" (UID: "a4705d3e-bf23-43ed-92f2-b8c9bcffbbff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.689063 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfwgv\" (UniqueName: \"kubernetes.io/projected/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-kube-api-access-gfwgv\") on node \"crc\" DevicePath \"\"" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.689253 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:43:17 crc kubenswrapper[5136]: I0320 07:43:17.689336 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.084044 5136 generic.go:334] "Generic (PLEG): container finished" podID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" 
containerID="82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a" exitCode=0 Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.084099 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerDied","Data":"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a"} Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.084153 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn4tr" event={"ID":"a4705d3e-bf23-43ed-92f2-b8c9bcffbbff","Type":"ContainerDied","Data":"a803bd4eb38a46416dcf94445aab13c2f8df6b784f82441df8622ec1b09e2a61"} Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.084188 5136 scope.go:117] "RemoveContainer" containerID="82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.084188 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn4tr" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.115906 5136 scope.go:117] "RemoveContainer" containerID="0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.149523 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.159920 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cn4tr"] Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.176503 5136 scope.go:117] "RemoveContainer" containerID="2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.205396 5136 scope.go:117] "RemoveContainer" containerID="82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a" Mar 20 07:43:18 crc kubenswrapper[5136]: E0320 07:43:18.206192 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a\": container with ID starting with 82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a not found: ID does not exist" containerID="82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.206261 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a"} err="failed to get container status \"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a\": rpc error: code = NotFound desc = could not find container \"82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a\": container with ID starting with 82c6a2ac936962fc36a01eb34e2885f50ddac2a35b7e24ce6b536e3e1e54029a not 
found: ID does not exist" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.206311 5136 scope.go:117] "RemoveContainer" containerID="0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa" Mar 20 07:43:18 crc kubenswrapper[5136]: E0320 07:43:18.206788 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa\": container with ID starting with 0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa not found: ID does not exist" containerID="0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.206938 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa"} err="failed to get container status \"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa\": rpc error: code = NotFound desc = could not find container \"0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa\": container with ID starting with 0bf569df812b3903f1ee6037acdc6c6ffb476e9af9ef8fe3531a8f25781111aa not found: ID does not exist" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.207038 5136 scope.go:117] "RemoveContainer" containerID="2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786" Mar 20 07:43:18 crc kubenswrapper[5136]: E0320 07:43:18.207535 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786\": container with ID starting with 2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786 not found: ID does not exist" containerID="2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.207622 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786"} err="failed to get container status \"2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786\": rpc error: code = NotFound desc = could not find container \"2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786\": container with ID starting with 2c57456fcda577800cc60dc3fe91ba338fd3b302a34f469143e23e947af26786 not found: ID does not exist" Mar 20 07:43:18 crc kubenswrapper[5136]: I0320 07:43:18.412673 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" path="/var/lib/kubelet/pods/a4705d3e-bf23-43ed-92f2-b8c9bcffbbff/volumes" Mar 20 07:43:31 crc kubenswrapper[5136]: I0320 07:43:31.397778 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:43:31 crc kubenswrapper[5136]: E0320 07:43:31.398549 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:43:44 crc kubenswrapper[5136]: I0320 07:43:44.397798 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:43:44 crc kubenswrapper[5136]: E0320 07:43:44.400923 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:43:55 crc kubenswrapper[5136]: I0320 07:43:55.397250 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:43:55 crc kubenswrapper[5136]: E0320 07:43:55.397951 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.175929 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566544-2kjwj"] Mar 20 07:44:00 crc kubenswrapper[5136]: E0320 07:44:00.176568 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="registry-server" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.176593 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="registry-server" Mar 20 07:44:00 crc kubenswrapper[5136]: E0320 07:44:00.176609 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="extract-utilities" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.176617 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="extract-utilities" Mar 20 07:44:00 crc kubenswrapper[5136]: E0320 07:44:00.176628 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="extract-content" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.176637 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="extract-content" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.176778 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4705d3e-bf23-43ed-92f2-b8c9bcffbbff" containerName="registry-server" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.177349 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.183781 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.183943 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.184336 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.187652 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566544-2kjwj"] Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.360714 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56w4\" (UniqueName: \"kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4\") pod \"auto-csr-approver-29566544-2kjwj\" (UID: \"26c6802e-62e8-47ba-b964-fde9f92ca8ef\") " pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.462572 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k56w4\" (UniqueName: 
\"kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4\") pod \"auto-csr-approver-29566544-2kjwj\" (UID: \"26c6802e-62e8-47ba-b964-fde9f92ca8ef\") " pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.485996 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k56w4\" (UniqueName: \"kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4\") pod \"auto-csr-approver-29566544-2kjwj\" (UID: \"26c6802e-62e8-47ba-b964-fde9f92ca8ef\") " pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.496120 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:00 crc kubenswrapper[5136]: I0320 07:44:00.876736 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566544-2kjwj"] Mar 20 07:44:01 crc kubenswrapper[5136]: I0320 07:44:01.444204 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" event={"ID":"26c6802e-62e8-47ba-b964-fde9f92ca8ef","Type":"ContainerStarted","Data":"d5d5ee818bbda6f5db1f63e1b0ea3e0da7baf51b71f50bdcc585d3558777fa95"} Mar 20 07:44:02 crc kubenswrapper[5136]: I0320 07:44:02.461174 5136 generic.go:334] "Generic (PLEG): container finished" podID="26c6802e-62e8-47ba-b964-fde9f92ca8ef" containerID="340e29815927db9adaf364543d249649b4c4d562d5c4326419747f3242c8e07d" exitCode=0 Mar 20 07:44:02 crc kubenswrapper[5136]: I0320 07:44:02.461241 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" event={"ID":"26c6802e-62e8-47ba-b964-fde9f92ca8ef","Type":"ContainerDied","Data":"340e29815927db9adaf364543d249649b4c4d562d5c4326419747f3242c8e07d"} Mar 20 07:44:03 crc kubenswrapper[5136]: I0320 07:44:03.772693 5136 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:03 crc kubenswrapper[5136]: I0320 07:44:03.921528 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k56w4\" (UniqueName: \"kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4\") pod \"26c6802e-62e8-47ba-b964-fde9f92ca8ef\" (UID: \"26c6802e-62e8-47ba-b964-fde9f92ca8ef\") " Mar 20 07:44:03 crc kubenswrapper[5136]: I0320 07:44:03.928082 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4" (OuterVolumeSpecName: "kube-api-access-k56w4") pod "26c6802e-62e8-47ba-b964-fde9f92ca8ef" (UID: "26c6802e-62e8-47ba-b964-fde9f92ca8ef"). InnerVolumeSpecName "kube-api-access-k56w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.023146 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k56w4\" (UniqueName: \"kubernetes.io/projected/26c6802e-62e8-47ba-b964-fde9f92ca8ef-kube-api-access-k56w4\") on node \"crc\" DevicePath \"\"" Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.479054 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" event={"ID":"26c6802e-62e8-47ba-b964-fde9f92ca8ef","Type":"ContainerDied","Data":"d5d5ee818bbda6f5db1f63e1b0ea3e0da7baf51b71f50bdcc585d3558777fa95"} Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.479127 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d5ee818bbda6f5db1f63e1b0ea3e0da7baf51b71f50bdcc585d3558777fa95" Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.479164 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566544-2kjwj" Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.833513 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566538-d9gdj"] Mar 20 07:44:04 crc kubenswrapper[5136]: I0320 07:44:04.840067 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566538-d9gdj"] Mar 20 07:44:06 crc kubenswrapper[5136]: I0320 07:44:06.407434 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92f22f97-0d01-4c04-8d7c-8f0ec81c1559" path="/var/lib/kubelet/pods/92f22f97-0d01-4c04-8d7c-8f0ec81c1559/volumes" Mar 20 07:44:07 crc kubenswrapper[5136]: I0320 07:44:07.397378 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:44:07 crc kubenswrapper[5136]: E0320 07:44:07.397686 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:44:21 crc kubenswrapper[5136]: I0320 07:44:21.397128 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:44:21 crc kubenswrapper[5136]: E0320 07:44:21.397579 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:44:34 crc kubenswrapper[5136]: I0320 07:44:34.501121 5136 scope.go:117] "RemoveContainer" containerID="085754035e9717018bfa13a75fa5176cf36ff83294b4916d7b6b6031d31b5c22" Mar 20 07:44:35 crc kubenswrapper[5136]: I0320 07:44:35.397981 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:44:35 crc kubenswrapper[5136]: E0320 07:44:35.398464 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:44:48 crc kubenswrapper[5136]: I0320 07:44:48.401125 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:44:48 crc kubenswrapper[5136]: E0320 07:44:48.402104 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:44:55 crc kubenswrapper[5136]: I0320 07:44:55.974828 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:44:55 crc kubenswrapper[5136]: E0320 07:44:55.975466 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26c6802e-62e8-47ba-b964-fde9f92ca8ef" containerName="oc" Mar 20 07:44:55 crc kubenswrapper[5136]: I0320 07:44:55.975481 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="26c6802e-62e8-47ba-b964-fde9f92ca8ef" containerName="oc" Mar 20 07:44:55 crc kubenswrapper[5136]: I0320 07:44:55.975664 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="26c6802e-62e8-47ba-b964-fde9f92ca8ef" containerName="oc" Mar 20 07:44:55 crc kubenswrapper[5136]: I0320 07:44:55.976952 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:55 crc kubenswrapper[5136]: I0320 07:44:55.992892 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.100103 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.100167 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.100224 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76lm9\" (UniqueName: \"kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.201026 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76lm9\" (UniqueName: \"kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.201120 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.201150 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.201681 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.201695 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.223264 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-76lm9\" (UniqueName: \"kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9\") pod \"certified-operators-mzr5h\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.295340 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.755257 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:44:56 crc kubenswrapper[5136]: I0320 07:44:56.851759 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerStarted","Data":"b13ec984c4bc16d74f2087182c5cdff319920af93d300e004f263ca60bc836fb"} Mar 20 07:44:57 crc kubenswrapper[5136]: I0320 07:44:57.863220 5136 generic.go:334] "Generic (PLEG): container finished" podID="32051439-253c-4626-bc98-701985ff87cf" containerID="155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd" exitCode=0 Mar 20 07:44:57 crc kubenswrapper[5136]: I0320 07:44:57.863319 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerDied","Data":"155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd"} Mar 20 07:44:58 crc kubenswrapper[5136]: I0320 07:44:58.873988 5136 generic.go:334] "Generic (PLEG): container finished" podID="32051439-253c-4626-bc98-701985ff87cf" containerID="fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20" exitCode=0 Mar 20 07:44:58 crc kubenswrapper[5136]: I0320 07:44:58.874032 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" 
event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerDied","Data":"fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20"} Mar 20 07:44:59 crc kubenswrapper[5136]: I0320 07:44:59.887443 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerStarted","Data":"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec"} Mar 20 07:44:59 crc kubenswrapper[5136]: I0320 07:44:59.910228 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mzr5h" podStartSLOduration=3.522531322 podStartE2EDuration="4.910196633s" podCreationTimestamp="2026-03-20 07:44:55 +0000 UTC" firstStartedPulling="2026-03-20 07:44:57.865215351 +0000 UTC m=+3330.124526502" lastFinishedPulling="2026-03-20 07:44:59.252880662 +0000 UTC m=+3331.512191813" observedRunningTime="2026-03-20 07:44:59.906105636 +0000 UTC m=+3332.165416787" watchObservedRunningTime="2026-03-20 07:44:59.910196633 +0000 UTC m=+3332.169507834" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.170465 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7"] Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.172029 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.174280 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.174852 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.182077 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7"] Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.288636 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.288775 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzzpc\" (UniqueName: \"kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.288857 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.390082 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.390206 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzzpc\" (UniqueName: \"kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.392128 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.392183 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.403875 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.418310 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzzpc\" (UniqueName: \"kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc\") pod \"collect-profiles-29566545-r7zm7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.495515 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:00 crc kubenswrapper[5136]: I0320 07:45:00.945241 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7"] Mar 20 07:45:01 crc kubenswrapper[5136]: I0320 07:45:01.396276 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:45:01 crc kubenswrapper[5136]: E0320 07:45:01.396844 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:45:01 crc kubenswrapper[5136]: I0320 07:45:01.901365 5136 generic.go:334] "Generic (PLEG): container finished" podID="eb32e01f-d49f-4ba1-a1d4-c693765737e7" containerID="db23fd78398ebb125a153768bba0437d8fa09615fe8803585f26e9cdf330d2a9" 
exitCode=0 Mar 20 07:45:01 crc kubenswrapper[5136]: I0320 07:45:01.901423 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" event={"ID":"eb32e01f-d49f-4ba1-a1d4-c693765737e7","Type":"ContainerDied","Data":"db23fd78398ebb125a153768bba0437d8fa09615fe8803585f26e9cdf330d2a9"} Mar 20 07:45:01 crc kubenswrapper[5136]: I0320 07:45:01.901452 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" event={"ID":"eb32e01f-d49f-4ba1-a1d4-c693765737e7","Type":"ContainerStarted","Data":"e4cf3b4f21b7c971fc5586b70cdc1f18a9e7f6d1a0f3c953fd28b9795ef940c7"} Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.226009 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.229699 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzzpc\" (UniqueName: \"kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc\") pod \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.229757 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume\") pod \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.229874 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume\") pod \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\" (UID: \"eb32e01f-d49f-4ba1-a1d4-c693765737e7\") " Mar 20 
07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.230416 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume" (OuterVolumeSpecName: "config-volume") pod "eb32e01f-d49f-4ba1-a1d4-c693765737e7" (UID: "eb32e01f-d49f-4ba1-a1d4-c693765737e7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.234261 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eb32e01f-d49f-4ba1-a1d4-c693765737e7" (UID: "eb32e01f-d49f-4ba1-a1d4-c693765737e7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.234518 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc" (OuterVolumeSpecName: "kube-api-access-vzzpc") pod "eb32e01f-d49f-4ba1-a1d4-c693765737e7" (UID: "eb32e01f-d49f-4ba1-a1d4-c693765737e7"). InnerVolumeSpecName "kube-api-access-vzzpc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.330530 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzzpc\" (UniqueName: \"kubernetes.io/projected/eb32e01f-d49f-4ba1-a1d4-c693765737e7-kube-api-access-vzzpc\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.330565 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eb32e01f-d49f-4ba1-a1d4-c693765737e7-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.330579 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eb32e01f-d49f-4ba1-a1d4-c693765737e7-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.916339 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" event={"ID":"eb32e01f-d49f-4ba1-a1d4-c693765737e7","Type":"ContainerDied","Data":"e4cf3b4f21b7c971fc5586b70cdc1f18a9e7f6d1a0f3c953fd28b9795ef940c7"} Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.916381 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4cf3b4f21b7c971fc5586b70cdc1f18a9e7f6d1a0f3c953fd28b9795ef940c7" Mar 20 07:45:03 crc kubenswrapper[5136]: I0320 07:45:03.916391 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7" Mar 20 07:45:04 crc kubenswrapper[5136]: I0320 07:45:04.306522 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj"] Mar 20 07:45:04 crc kubenswrapper[5136]: I0320 07:45:04.314851 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566500-ljqvj"] Mar 20 07:45:04 crc kubenswrapper[5136]: I0320 07:45:04.417521 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd400575-ef96-4721-b617-29c85991f7f0" path="/var/lib/kubelet/pods/cd400575-ef96-4721-b617-29c85991f7f0/volumes" Mar 20 07:45:06 crc kubenswrapper[5136]: I0320 07:45:06.296421 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:06 crc kubenswrapper[5136]: I0320 07:45:06.296486 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:06 crc kubenswrapper[5136]: I0320 07:45:06.405898 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:06 crc kubenswrapper[5136]: I0320 07:45:06.984716 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:07 crc kubenswrapper[5136]: I0320 07:45:07.035794 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:45:08 crc kubenswrapper[5136]: I0320 07:45:08.955899 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mzr5h" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="registry-server" 
containerID="cri-o://2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec" gracePeriod=2 Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.355048 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.441526 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities\") pod \"32051439-253c-4626-bc98-701985ff87cf\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.441712 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76lm9\" (UniqueName: \"kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9\") pod \"32051439-253c-4626-bc98-701985ff87cf\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.441795 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content\") pod \"32051439-253c-4626-bc98-701985ff87cf\" (UID: \"32051439-253c-4626-bc98-701985ff87cf\") " Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.442760 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities" (OuterVolumeSpecName: "utilities") pod "32051439-253c-4626-bc98-701985ff87cf" (UID: "32051439-253c-4626-bc98-701985ff87cf"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.443608 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.455155 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9" (OuterVolumeSpecName: "kube-api-access-76lm9") pod "32051439-253c-4626-bc98-701985ff87cf" (UID: "32051439-253c-4626-bc98-701985ff87cf"). InnerVolumeSpecName "kube-api-access-76lm9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.510219 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32051439-253c-4626-bc98-701985ff87cf" (UID: "32051439-253c-4626-bc98-701985ff87cf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.544619 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76lm9\" (UniqueName: \"kubernetes.io/projected/32051439-253c-4626-bc98-701985ff87cf-kube-api-access-76lm9\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.544651 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32051439-253c-4626-bc98-701985ff87cf-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.966733 5136 generic.go:334] "Generic (PLEG): container finished" podID="32051439-253c-4626-bc98-701985ff87cf" containerID="2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec" exitCode=0 Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.966798 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerDied","Data":"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec"} Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.966849 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mzr5h" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.966866 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mzr5h" event={"ID":"32051439-253c-4626-bc98-701985ff87cf","Type":"ContainerDied","Data":"b13ec984c4bc16d74f2087182c5cdff319920af93d300e004f263ca60bc836fb"} Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.966898 5136 scope.go:117] "RemoveContainer" containerID="2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec" Mar 20 07:45:09 crc kubenswrapper[5136]: I0320 07:45:09.997318 5136 scope.go:117] "RemoveContainer" containerID="fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.024337 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.032285 5136 scope.go:117] "RemoveContainer" containerID="155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.033634 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mzr5h"] Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.057468 5136 scope.go:117] "RemoveContainer" containerID="2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec" Mar 20 07:45:10 crc kubenswrapper[5136]: E0320 07:45:10.057798 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec\": container with ID starting with 2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec not found: ID does not exist" containerID="2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.057845 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec"} err="failed to get container status \"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec\": rpc error: code = NotFound desc = could not find container \"2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec\": container with ID starting with 2a902ccae9866d500279ccc8948169f1326e0c50ce1f3c533cbbfa6d3a4675ec not found: ID does not exist" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.057865 5136 scope.go:117] "RemoveContainer" containerID="fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20" Mar 20 07:45:10 crc kubenswrapper[5136]: E0320 07:45:10.058040 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20\": container with ID starting with fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20 not found: ID does not exist" containerID="fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.058060 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20"} err="failed to get container status \"fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20\": rpc error: code = NotFound desc = could not find container \"fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20\": container with ID starting with fb069e8de83eb079ed737945d738d486fae08bdc5ea74d1c1b8f34268a73bf20 not found: ID does not exist" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.058073 5136 scope.go:117] "RemoveContainer" containerID="155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd" Mar 20 07:45:10 crc kubenswrapper[5136]: E0320 
07:45:10.058494 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd\": container with ID starting with 155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd not found: ID does not exist" containerID="155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.058521 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd"} err="failed to get container status \"155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd\": rpc error: code = NotFound desc = could not find container \"155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd\": container with ID starting with 155d5a1866f72955aa5da99c9541dacbdf54245c448286a92a53191263c92ddd not found: ID does not exist" Mar 20 07:45:10 crc kubenswrapper[5136]: I0320 07:45:10.455071 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32051439-253c-4626-bc98-701985ff87cf" path="/var/lib/kubelet/pods/32051439-253c-4626-bc98-701985ff87cf/volumes" Mar 20 07:45:14 crc kubenswrapper[5136]: I0320 07:45:14.398161 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:45:14 crc kubenswrapper[5136]: E0320 07:45:14.399398 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:45:29 crc kubenswrapper[5136]: I0320 07:45:29.397115 
5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:45:29 crc kubenswrapper[5136]: E0320 07:45:29.398030 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:45:34 crc kubenswrapper[5136]: I0320 07:45:34.578741 5136 scope.go:117] "RemoveContainer" containerID="3ae7890d536278f5580d52b91ca1ce94c8e1b0783ea4d154db2f9c059b03bba9" Mar 20 07:45:40 crc kubenswrapper[5136]: I0320 07:45:40.396165 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:45:40 crc kubenswrapper[5136]: E0320 07:45:40.396995 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.581221 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:45:46 crc kubenswrapper[5136]: E0320 07:45:46.582300 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb32e01f-d49f-4ba1-a1d4-c693765737e7" containerName="collect-profiles" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582333 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb32e01f-d49f-4ba1-a1d4-c693765737e7" 
containerName="collect-profiles" Mar 20 07:45:46 crc kubenswrapper[5136]: E0320 07:45:46.582366 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="extract-content" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582402 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="extract-content" Mar 20 07:45:46 crc kubenswrapper[5136]: E0320 07:45:46.582468 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="extract-utilities" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582533 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="extract-utilities" Mar 20 07:45:46 crc kubenswrapper[5136]: E0320 07:45:46.582558 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="registry-server" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582574 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="registry-server" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582925 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="32051439-253c-4626-bc98-701985ff87cf" containerName="registry-server" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.582980 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb32e01f-d49f-4ba1-a1d4-c693765737e7" containerName="collect-profiles" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.585304 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.592259 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.697377 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwfxs\" (UniqueName: \"kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.697464 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.697535 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.798658 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwfxs\" (UniqueName: \"kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.799091 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.799256 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.800295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.800410 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.835972 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwfxs\" (UniqueName: \"kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs\") pod \"redhat-operators-zqkcz\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:46 crc kubenswrapper[5136]: I0320 07:45:46.922790 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:47 crc kubenswrapper[5136]: I0320 07:45:47.366945 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:45:48 crc kubenswrapper[5136]: I0320 07:45:48.339961 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerID="6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3" exitCode=0 Mar 20 07:45:48 crc kubenswrapper[5136]: I0320 07:45:48.340019 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerDied","Data":"6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3"} Mar 20 07:45:48 crc kubenswrapper[5136]: I0320 07:45:48.340296 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerStarted","Data":"b8c67125bd359d999fbee971a3189826bd59dfe503f1312a57ccf93e170a140d"} Mar 20 07:45:49 crc kubenswrapper[5136]: I0320 07:45:49.348490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerStarted","Data":"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930"} Mar 20 07:45:50 crc kubenswrapper[5136]: I0320 07:45:50.360629 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerID="5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930" exitCode=0 Mar 20 07:45:50 crc kubenswrapper[5136]: I0320 07:45:50.360712 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" 
event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerDied","Data":"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930"} Mar 20 07:45:51 crc kubenswrapper[5136]: I0320 07:45:51.397110 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:45:51 crc kubenswrapper[5136]: E0320 07:45:51.398046 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:45:52 crc kubenswrapper[5136]: I0320 07:45:52.375396 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerStarted","Data":"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4"} Mar 20 07:45:52 crc kubenswrapper[5136]: I0320 07:45:52.407901 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zqkcz" podStartSLOduration=3.535086786 podStartE2EDuration="6.407881385s" podCreationTimestamp="2026-03-20 07:45:46 +0000 UTC" firstStartedPulling="2026-03-20 07:45:48.342182201 +0000 UTC m=+3380.601493362" lastFinishedPulling="2026-03-20 07:45:51.21497681 +0000 UTC m=+3383.474287961" observedRunningTime="2026-03-20 07:45:52.399069983 +0000 UTC m=+3384.658381144" watchObservedRunningTime="2026-03-20 07:45:52.407881385 +0000 UTC m=+3384.667192536" Mar 20 07:45:56 crc kubenswrapper[5136]: I0320 07:45:56.923615 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:56 crc kubenswrapper[5136]: 
I0320 07:45:56.927307 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:45:57 crc kubenswrapper[5136]: I0320 07:45:57.979623 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zqkcz" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="registry-server" probeResult="failure" output=< Mar 20 07:45:57 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 07:45:57 crc kubenswrapper[5136]: > Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.166280 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566546-hbdmv"] Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.167298 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.169193 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.170465 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.170468 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.181572 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566546-hbdmv"] Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.299274 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkwr2\" (UniqueName: \"kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2\") pod \"auto-csr-approver-29566546-hbdmv\" (UID: 
\"d740b018-8653-4631-8138-93e535687c7b\") " pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.400654 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkwr2\" (UniqueName: \"kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2\") pod \"auto-csr-approver-29566546-hbdmv\" (UID: \"d740b018-8653-4631-8138-93e535687c7b\") " pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.427622 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkwr2\" (UniqueName: \"kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2\") pod \"auto-csr-approver-29566546-hbdmv\" (UID: \"d740b018-8653-4631-8138-93e535687c7b\") " pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.505111 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:00 crc kubenswrapper[5136]: I0320 07:46:00.942610 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566546-hbdmv"] Mar 20 07:46:01 crc kubenswrapper[5136]: I0320 07:46:01.464449 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" event={"ID":"d740b018-8653-4631-8138-93e535687c7b","Type":"ContainerStarted","Data":"13314b30b0adc87096e0c3c8ede51e24fbc40cfeed01b2fce0a0e6136dde31b3"} Mar 20 07:46:02 crc kubenswrapper[5136]: I0320 07:46:02.472103 5136 generic.go:334] "Generic (PLEG): container finished" podID="d740b018-8653-4631-8138-93e535687c7b" containerID="9bfd391ee5ff09e988d9f0f680d2e722fd7f235ba526ec5418b765f7a572ee8f" exitCode=0 Mar 20 07:46:02 crc kubenswrapper[5136]: I0320 07:46:02.472155 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" event={"ID":"d740b018-8653-4631-8138-93e535687c7b","Type":"ContainerDied","Data":"9bfd391ee5ff09e988d9f0f680d2e722fd7f235ba526ec5418b765f7a572ee8f"} Mar 20 07:46:03 crc kubenswrapper[5136]: I0320 07:46:03.720044 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:03 crc kubenswrapper[5136]: I0320 07:46:03.850145 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkwr2\" (UniqueName: \"kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2\") pod \"d740b018-8653-4631-8138-93e535687c7b\" (UID: \"d740b018-8653-4631-8138-93e535687c7b\") " Mar 20 07:46:03 crc kubenswrapper[5136]: I0320 07:46:03.855355 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2" (OuterVolumeSpecName: "kube-api-access-fkwr2") pod "d740b018-8653-4631-8138-93e535687c7b" (UID: "d740b018-8653-4631-8138-93e535687c7b"). InnerVolumeSpecName "kube-api-access-fkwr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:46:03 crc kubenswrapper[5136]: I0320 07:46:03.951561 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkwr2\" (UniqueName: \"kubernetes.io/projected/d740b018-8653-4631-8138-93e535687c7b-kube-api-access-fkwr2\") on node \"crc\" DevicePath \"\"" Mar 20 07:46:04 crc kubenswrapper[5136]: I0320 07:46:04.487084 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" event={"ID":"d740b018-8653-4631-8138-93e535687c7b","Type":"ContainerDied","Data":"13314b30b0adc87096e0c3c8ede51e24fbc40cfeed01b2fce0a0e6136dde31b3"} Mar 20 07:46:04 crc kubenswrapper[5136]: I0320 07:46:04.487153 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13314b30b0adc87096e0c3c8ede51e24fbc40cfeed01b2fce0a0e6136dde31b3" Mar 20 07:46:04 crc kubenswrapper[5136]: I0320 07:46:04.487110 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566546-hbdmv" Mar 20 07:46:04 crc kubenswrapper[5136]: I0320 07:46:04.808663 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566540-q2mqq"] Mar 20 07:46:04 crc kubenswrapper[5136]: I0320 07:46:04.816108 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566540-q2mqq"] Mar 20 07:46:05 crc kubenswrapper[5136]: I0320 07:46:05.396901 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:46:05 crc kubenswrapper[5136]: E0320 07:46:05.397143 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:46:06 crc kubenswrapper[5136]: I0320 07:46:06.408382 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13ab4686-525c-4931-93d9-5b71ec6644ee" path="/var/lib/kubelet/pods/13ab4686-525c-4931-93d9-5b71ec6644ee/volumes" Mar 20 07:46:06 crc kubenswrapper[5136]: I0320 07:46:06.994538 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:46:07 crc kubenswrapper[5136]: I0320 07:46:07.067975 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:46:07 crc kubenswrapper[5136]: I0320 07:46:07.267119 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:46:08 crc kubenswrapper[5136]: I0320 07:46:08.515490 5136 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zqkcz" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="registry-server" containerID="cri-o://bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4" gracePeriod=2 Mar 20 07:46:08 crc kubenswrapper[5136]: I0320 07:46:08.881030 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.023106 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content\") pod \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.023158 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities\") pod \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.023222 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwfxs\" (UniqueName: \"kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs\") pod \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\" (UID: \"ccc42a65-cfdd-4b03-aecb-404be7591cfb\") " Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.024440 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities" (OuterVolumeSpecName: "utilities") pod "ccc42a65-cfdd-4b03-aecb-404be7591cfb" (UID: "ccc42a65-cfdd-4b03-aecb-404be7591cfb"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.030870 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs" (OuterVolumeSpecName: "kube-api-access-pwfxs") pod "ccc42a65-cfdd-4b03-aecb-404be7591cfb" (UID: "ccc42a65-cfdd-4b03-aecb-404be7591cfb"). InnerVolumeSpecName "kube-api-access-pwfxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.125328 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.125369 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwfxs\" (UniqueName: \"kubernetes.io/projected/ccc42a65-cfdd-4b03-aecb-404be7591cfb-kube-api-access-pwfxs\") on node \"crc\" DevicePath \"\"" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.148849 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ccc42a65-cfdd-4b03-aecb-404be7591cfb" (UID: "ccc42a65-cfdd-4b03-aecb-404be7591cfb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.226185 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ccc42a65-cfdd-4b03-aecb-404be7591cfb-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.527039 5136 generic.go:334] "Generic (PLEG): container finished" podID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerID="bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4" exitCode=0 Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.527080 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerDied","Data":"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4"} Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.527109 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zqkcz" event={"ID":"ccc42a65-cfdd-4b03-aecb-404be7591cfb","Type":"ContainerDied","Data":"b8c67125bd359d999fbee971a3189826bd59dfe503f1312a57ccf93e170a140d"} Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.527116 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zqkcz" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.527127 5136 scope.go:117] "RemoveContainer" containerID="bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.548940 5136 scope.go:117] "RemoveContainer" containerID="5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.582373 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.586027 5136 scope.go:117] "RemoveContainer" containerID="6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.592937 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zqkcz"] Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.608289 5136 scope.go:117] "RemoveContainer" containerID="bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4" Mar 20 07:46:09 crc kubenswrapper[5136]: E0320 07:46:09.609356 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4\": container with ID starting with bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4 not found: ID does not exist" containerID="bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.609396 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4"} err="failed to get container status \"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4\": rpc error: code = NotFound desc = could not find container 
\"bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4\": container with ID starting with bd3587dd293b2d0d4bf32d8b26ba6b4a03274c308fda3c951b3a4e835a0531f4 not found: ID does not exist" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.609420 5136 scope.go:117] "RemoveContainer" containerID="5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930" Mar 20 07:46:09 crc kubenswrapper[5136]: E0320 07:46:09.609792 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930\": container with ID starting with 5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930 not found: ID does not exist" containerID="5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.609980 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930"} err="failed to get container status \"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930\": rpc error: code = NotFound desc = could not find container \"5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930\": container with ID starting with 5f40383097fb516d4d7e121c85f50aca3d283b345f7ee750a11cc9b80bac3930 not found: ID does not exist" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.610011 5136 scope.go:117] "RemoveContainer" containerID="6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3" Mar 20 07:46:09 crc kubenswrapper[5136]: E0320 07:46:09.610374 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3\": container with ID starting with 6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3 not found: ID does not exist" 
containerID="6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3" Mar 20 07:46:09 crc kubenswrapper[5136]: I0320 07:46:09.610424 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3"} err="failed to get container status \"6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3\": rpc error: code = NotFound desc = could not find container \"6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3\": container with ID starting with 6906d8409e5858bfb791bed2e0a7754aeaafe822cde05402672ea1f63dce41e3 not found: ID does not exist" Mar 20 07:46:10 crc kubenswrapper[5136]: I0320 07:46:10.405742 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" path="/var/lib/kubelet/pods/ccc42a65-cfdd-4b03-aecb-404be7591cfb/volumes" Mar 20 07:46:17 crc kubenswrapper[5136]: I0320 07:46:17.396403 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:46:17 crc kubenswrapper[5136]: E0320 07:46:17.397204 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:46:32 crc kubenswrapper[5136]: I0320 07:46:32.396416 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:46:32 crc kubenswrapper[5136]: E0320 07:46:32.396982 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:46:34 crc kubenswrapper[5136]: I0320 07:46:34.645615 5136 scope.go:117] "RemoveContainer" containerID="252260f3a58979042bf8b21321cd53a2147f00019a6012d4dfcab45147ceb6a9" Mar 20 07:46:44 crc kubenswrapper[5136]: I0320 07:46:44.396467 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:46:44 crc kubenswrapper[5136]: E0320 07:46:44.397443 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:46:58 crc kubenswrapper[5136]: I0320 07:46:58.404182 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:46:58 crc kubenswrapper[5136]: E0320 07:46:58.404981 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:47:13 crc kubenswrapper[5136]: I0320 07:47:13.397475 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:47:13 crc 
kubenswrapper[5136]: E0320 07:47:13.398374 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:47:24 crc kubenswrapper[5136]: I0320 07:47:24.397052 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:47:24 crc kubenswrapper[5136]: E0320 07:47:24.398004 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:47:35 crc kubenswrapper[5136]: I0320 07:47:35.397012 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:47:35 crc kubenswrapper[5136]: E0320 07:47:35.398529 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:47:50 crc kubenswrapper[5136]: I0320 07:47:50.396975 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 
20 07:47:50 crc kubenswrapper[5136]: E0320 07:47:50.397750 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.167657 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566548-bbghf"] Mar 20 07:48:00 crc kubenswrapper[5136]: E0320 07:48:00.168767 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="registry-server" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.168790 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="registry-server" Mar 20 07:48:00 crc kubenswrapper[5136]: E0320 07:48:00.168836 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d740b018-8653-4631-8138-93e535687c7b" containerName="oc" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.168852 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d740b018-8653-4631-8138-93e535687c7b" containerName="oc" Mar 20 07:48:00 crc kubenswrapper[5136]: E0320 07:48:00.168882 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="extract-content" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.168896 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="extract-content" Mar 20 07:48:00 crc kubenswrapper[5136]: E0320 07:48:00.168930 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" 
containerName="extract-utilities" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.168944 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="extract-utilities" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.169213 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccc42a65-cfdd-4b03-aecb-404be7591cfb" containerName="registry-server" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.169251 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d740b018-8653-4631-8138-93e535687c7b" containerName="oc" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.172082 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.174742 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.175119 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.175336 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.179107 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566548-bbghf"] Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.183899 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgmzn\" (UniqueName: \"kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn\") pod \"auto-csr-approver-29566548-bbghf\" (UID: \"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66\") " pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:00 crc kubenswrapper[5136]: 
I0320 07:48:00.285069 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jgmzn\" (UniqueName: \"kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn\") pod \"auto-csr-approver-29566548-bbghf\" (UID: \"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66\") " pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.307941 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jgmzn\" (UniqueName: \"kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn\") pod \"auto-csr-approver-29566548-bbghf\" (UID: \"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66\") " pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.500425 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:00 crc kubenswrapper[5136]: I0320 07:48:00.965845 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566548-bbghf"] Mar 20 07:48:01 crc kubenswrapper[5136]: I0320 07:48:01.471991 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566548-bbghf" event={"ID":"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66","Type":"ContainerStarted","Data":"1433de0b16703d4f4d1a60e645bfe424883afa10f19be31feb9b5b1ff0ed4a28"} Mar 20 07:48:02 crc kubenswrapper[5136]: I0320 07:48:02.479874 5136 generic.go:334] "Generic (PLEG): container finished" podID="eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" containerID="eda4db7731b82a54ef6f8997e413d44c2ceb0549c49bbb5b7671591ccebd691e" exitCode=0 Mar 20 07:48:02 crc kubenswrapper[5136]: I0320 07:48:02.479941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566548-bbghf" 
event={"ID":"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66","Type":"ContainerDied","Data":"eda4db7731b82a54ef6f8997e413d44c2ceb0549c49bbb5b7671591ccebd691e"} Mar 20 07:48:03 crc kubenswrapper[5136]: I0320 07:48:03.396521 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:48:03 crc kubenswrapper[5136]: E0320 07:48:03.396752 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:48:03 crc kubenswrapper[5136]: I0320 07:48:03.801231 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:03 crc kubenswrapper[5136]: I0320 07:48:03.841500 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgmzn\" (UniqueName: \"kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn\") pod \"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66\" (UID: \"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66\") " Mar 20 07:48:03 crc kubenswrapper[5136]: I0320 07:48:03.847430 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn" (OuterVolumeSpecName: "kube-api-access-jgmzn") pod "eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" (UID: "eae2b10f-99a8-4ada-a8fb-d674d6e2dc66"). InnerVolumeSpecName "kube-api-access-jgmzn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:48:03 crc kubenswrapper[5136]: I0320 07:48:03.943174 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jgmzn\" (UniqueName: \"kubernetes.io/projected/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66-kube-api-access-jgmzn\") on node \"crc\" DevicePath \"\"" Mar 20 07:48:04 crc kubenswrapper[5136]: I0320 07:48:04.496939 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566548-bbghf" event={"ID":"eae2b10f-99a8-4ada-a8fb-d674d6e2dc66","Type":"ContainerDied","Data":"1433de0b16703d4f4d1a60e645bfe424883afa10f19be31feb9b5b1ff0ed4a28"} Mar 20 07:48:04 crc kubenswrapper[5136]: I0320 07:48:04.496976 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1433de0b16703d4f4d1a60e645bfe424883afa10f19be31feb9b5b1ff0ed4a28" Mar 20 07:48:04 crc kubenswrapper[5136]: I0320 07:48:04.497019 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566548-bbghf" Mar 20 07:48:04 crc kubenswrapper[5136]: I0320 07:48:04.881154 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566542-pds9m"] Mar 20 07:48:04 crc kubenswrapper[5136]: I0320 07:48:04.887804 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566542-pds9m"] Mar 20 07:48:06 crc kubenswrapper[5136]: I0320 07:48:06.413589 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17d864d8-8238-4e66-b9ac-d03d95596254" path="/var/lib/kubelet/pods/17d864d8-8238-4e66-b9ac-d03d95596254/volumes" Mar 20 07:48:16 crc kubenswrapper[5136]: I0320 07:48:16.397888 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:48:17 crc kubenswrapper[5136]: I0320 07:48:17.597588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55"} Mar 20 07:48:34 crc kubenswrapper[5136]: I0320 07:48:34.746838 5136 scope.go:117] "RemoveContainer" containerID="7d44df1c73e9c1d9108526abbe2353b5337e03d920bac4de2652a37d15133fc6" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.171065 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566550-nxn8z"] Mar 20 07:50:00 crc kubenswrapper[5136]: E0320 07:50:00.172282 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" containerName="oc" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.172306 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" containerName="oc" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.172648 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" containerName="oc" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.173443 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.176213 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.176773 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.177455 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.182493 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566550-nxn8z"] Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.227556 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmgpk\" (UniqueName: \"kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk\") pod \"auto-csr-approver-29566550-nxn8z\" (UID: \"9e892786-304f-4449-8303-227a30b2af0c\") " pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.328760 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmgpk\" (UniqueName: \"kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk\") pod \"auto-csr-approver-29566550-nxn8z\" (UID: \"9e892786-304f-4449-8303-227a30b2af0c\") " pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.352191 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmgpk\" (UniqueName: \"kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk\") pod \"auto-csr-approver-29566550-nxn8z\" (UID: \"9e892786-304f-4449-8303-227a30b2af0c\") " 
pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:00 crc kubenswrapper[5136]: I0320 07:50:00.515028 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:01 crc kubenswrapper[5136]: I0320 07:50:00.954705 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566550-nxn8z"] Mar 20 07:50:01 crc kubenswrapper[5136]: I0320 07:50:00.962740 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:50:01 crc kubenswrapper[5136]: I0320 07:50:01.850696 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" event={"ID":"9e892786-304f-4449-8303-227a30b2af0c","Type":"ContainerStarted","Data":"438cc651f5e5b76703c57ebcad93173aa9662ff0fd2db4df85be5a41bf39a7ef"} Mar 20 07:50:02 crc kubenswrapper[5136]: I0320 07:50:02.861454 5136 generic.go:334] "Generic (PLEG): container finished" podID="9e892786-304f-4449-8303-227a30b2af0c" containerID="fbf51c0f85e48cadf70be318d12d8502f4a21ef24eddd694f9b07eebf9064ae5" exitCode=0 Mar 20 07:50:02 crc kubenswrapper[5136]: I0320 07:50:02.861539 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" event={"ID":"9e892786-304f-4449-8303-227a30b2af0c","Type":"ContainerDied","Data":"fbf51c0f85e48cadf70be318d12d8502f4a21ef24eddd694f9b07eebf9064ae5"} Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.171643 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.285333 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmgpk\" (UniqueName: \"kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk\") pod \"9e892786-304f-4449-8303-227a30b2af0c\" (UID: \"9e892786-304f-4449-8303-227a30b2af0c\") " Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.291642 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk" (OuterVolumeSpecName: "kube-api-access-wmgpk") pod "9e892786-304f-4449-8303-227a30b2af0c" (UID: "9e892786-304f-4449-8303-227a30b2af0c"). InnerVolumeSpecName "kube-api-access-wmgpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.387578 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmgpk\" (UniqueName: \"kubernetes.io/projected/9e892786-304f-4449-8303-227a30b2af0c-kube-api-access-wmgpk\") on node \"crc\" DevicePath \"\"" Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.887298 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" event={"ID":"9e892786-304f-4449-8303-227a30b2af0c","Type":"ContainerDied","Data":"438cc651f5e5b76703c57ebcad93173aa9662ff0fd2db4df85be5a41bf39a7ef"} Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.887362 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="438cc651f5e5b76703c57ebcad93173aa9662ff0fd2db4df85be5a41bf39a7ef" Mar 20 07:50:04 crc kubenswrapper[5136]: I0320 07:50:04.887390 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566550-nxn8z" Mar 20 07:50:05 crc kubenswrapper[5136]: I0320 07:50:05.266074 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566544-2kjwj"] Mar 20 07:50:05 crc kubenswrapper[5136]: I0320 07:50:05.274138 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566544-2kjwj"] Mar 20 07:50:06 crc kubenswrapper[5136]: I0320 07:50:06.411936 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26c6802e-62e8-47ba-b964-fde9f92ca8ef" path="/var/lib/kubelet/pods/26c6802e-62e8-47ba-b964-fde9f92ca8ef/volumes" Mar 20 07:50:34 crc kubenswrapper[5136]: I0320 07:50:34.831724 5136 scope.go:117] "RemoveContainer" containerID="340e29815927db9adaf364543d249649b4c4d562d5c4326419747f3242c8e07d" Mar 20 07:50:45 crc kubenswrapper[5136]: I0320 07:50:45.822599 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:50:45 crc kubenswrapper[5136]: I0320 07:50:45.823981 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:51:15 crc kubenswrapper[5136]: I0320 07:51:15.822189 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:51:15 crc kubenswrapper[5136]: 
I0320 07:51:15.822685 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:51:45 crc kubenswrapper[5136]: I0320 07:51:45.822477 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:51:45 crc kubenswrapper[5136]: I0320 07:51:45.823457 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:51:45 crc kubenswrapper[5136]: I0320 07:51:45.823540 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:51:45 crc kubenswrapper[5136]: I0320 07:51:45.825257 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:51:45 crc kubenswrapper[5136]: I0320 07:51:45.825330 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" 
containerName="machine-config-daemon" containerID="cri-o://722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55" gracePeriod=600 Mar 20 07:51:46 crc kubenswrapper[5136]: I0320 07:51:46.388418 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55" exitCode=0 Mar 20 07:51:46 crc kubenswrapper[5136]: I0320 07:51:46.388486 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55"} Mar 20 07:51:46 crc kubenswrapper[5136]: I0320 07:51:46.388693 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e"} Mar 20 07:51:46 crc kubenswrapper[5136]: I0320 07:51:46.388709 5136 scope.go:117] "RemoveContainer" containerID="5f38a8e663b9107e466893deddc8e7d6246736153b8dba68e4a447b9fa421c17" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.159570 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566552-45w99"] Mar 20 07:52:00 crc kubenswrapper[5136]: E0320 07:52:00.160884 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e892786-304f-4449-8303-227a30b2af0c" containerName="oc" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.160907 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e892786-304f-4449-8303-227a30b2af0c" containerName="oc" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.161117 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e892786-304f-4449-8303-227a30b2af0c" containerName="oc" Mar 20 07:52:00 
crc kubenswrapper[5136]: I0320 07:52:00.163783 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.167576 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.167906 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.168227 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.171965 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566552-45w99"] Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.245889 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frbpk\" (UniqueName: \"kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk\") pod \"auto-csr-approver-29566552-45w99\" (UID: \"680d027e-ec7b-41fa-928c-826f0968c6f2\") " pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.347524 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frbpk\" (UniqueName: \"kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk\") pod \"auto-csr-approver-29566552-45w99\" (UID: \"680d027e-ec7b-41fa-928c-826f0968c6f2\") " pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.379255 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frbpk\" (UniqueName: \"kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk\") 
pod \"auto-csr-approver-29566552-45w99\" (UID: \"680d027e-ec7b-41fa-928c-826f0968c6f2\") " pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.492238 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:00 crc kubenswrapper[5136]: I0320 07:52:00.954577 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566552-45w99"] Mar 20 07:52:00 crc kubenswrapper[5136]: W0320 07:52:00.972005 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod680d027e_ec7b_41fa_928c_826f0968c6f2.slice/crio-c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9 WatchSource:0}: Error finding container c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9: Status 404 returned error can't find the container with id c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9 Mar 20 07:52:01 crc kubenswrapper[5136]: I0320 07:52:01.541093 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566552-45w99" event={"ID":"680d027e-ec7b-41fa-928c-826f0968c6f2","Type":"ContainerStarted","Data":"c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9"} Mar 20 07:52:02 crc kubenswrapper[5136]: I0320 07:52:02.554718 5136 generic.go:334] "Generic (PLEG): container finished" podID="680d027e-ec7b-41fa-928c-826f0968c6f2" containerID="850f029af670145399cc93675607b8410dbf5d367cbba9e2397a2a62aff8327a" exitCode=0 Mar 20 07:52:02 crc kubenswrapper[5136]: I0320 07:52:02.554844 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566552-45w99" event={"ID":"680d027e-ec7b-41fa-928c-826f0968c6f2","Type":"ContainerDied","Data":"850f029af670145399cc93675607b8410dbf5d367cbba9e2397a2a62aff8327a"} Mar 20 07:52:03 crc kubenswrapper[5136]: 
I0320 07:52:03.881767 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.002326 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frbpk\" (UniqueName: \"kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk\") pod \"680d027e-ec7b-41fa-928c-826f0968c6f2\" (UID: \"680d027e-ec7b-41fa-928c-826f0968c6f2\") " Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.012507 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk" (OuterVolumeSpecName: "kube-api-access-frbpk") pod "680d027e-ec7b-41fa-928c-826f0968c6f2" (UID: "680d027e-ec7b-41fa-928c-826f0968c6f2"). InnerVolumeSpecName "kube-api-access-frbpk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.104612 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frbpk\" (UniqueName: \"kubernetes.io/projected/680d027e-ec7b-41fa-928c-826f0968c6f2-kube-api-access-frbpk\") on node \"crc\" DevicePath \"\"" Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.580199 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566552-45w99" event={"ID":"680d027e-ec7b-41fa-928c-826f0968c6f2","Type":"ContainerDied","Data":"c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9"} Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.580292 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c717f439bca7d009c7c1f1ea3a9a8dc4869c462184b5bee11ccec048399262c9" Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.580308 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566552-45w99" Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.973488 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566546-hbdmv"] Mar 20 07:52:04 crc kubenswrapper[5136]: I0320 07:52:04.984665 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566546-hbdmv"] Mar 20 07:52:06 crc kubenswrapper[5136]: I0320 07:52:06.408720 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d740b018-8653-4631-8138-93e535687c7b" path="/var/lib/kubelet/pods/d740b018-8653-4631-8138-93e535687c7b/volumes" Mar 20 07:52:34 crc kubenswrapper[5136]: I0320 07:52:34.930414 5136 scope.go:117] "RemoveContainer" containerID="9bfd391ee5ff09e988d9f0f680d2e722fd7f235ba526ec5418b765f7a572ee8f" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.631117 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 07:53:16 crc kubenswrapper[5136]: E0320 07:53:16.632343 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="680d027e-ec7b-41fa-928c-826f0968c6f2" containerName="oc" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.632372 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="680d027e-ec7b-41fa-928c-826f0968c6f2" containerName="oc" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.632667 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="680d027e-ec7b-41fa-928c-826f0968c6f2" containerName="oc" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.634342 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.646218 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.680951 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.681014 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.681081 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92f6b\" (UniqueName: \"kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.782228 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.782310 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.782388 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92f6b\" (UniqueName: \"kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.782768 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.782833 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.813577 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92f6b\" (UniqueName: \"kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b\") pod \"community-operators-dmmv5\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " pod="openshift-marketplace/community-operators-dmmv5" Mar 20 07:53:16 crc kubenswrapper[5136]: I0320 07:53:16.962833 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmmv5"
Mar 20 07:53:17 crc kubenswrapper[5136]: I0320 07:53:17.466642 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"]
Mar 20 07:53:18 crc kubenswrapper[5136]: I0320 07:53:18.174472 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerID="360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677" exitCode=0
Mar 20 07:53:18 crc kubenswrapper[5136]: I0320 07:53:18.174523 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerDied","Data":"360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677"}
Mar 20 07:53:18 crc kubenswrapper[5136]: I0320 07:53:18.174729 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerStarted","Data":"cffe9a0608630e68ebe445b4b73ca250c67588ca57233ed2a5a8f8aeafc8a8ef"}
Mar 20 07:53:22 crc kubenswrapper[5136]: E0320 07:53:22.988332 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd7d7add_fc30_4efd_96dc_b253a6fd1b8b.slice/crio-72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd.scope\": RecentStats: unable to find data in memory cache]"
Mar 20 07:53:23 crc kubenswrapper[5136]: I0320 07:53:23.212466 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerID="72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd" exitCode=0
Mar 20 07:53:23 crc kubenswrapper[5136]: I0320 07:53:23.212513 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerDied","Data":"72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd"}
Mar 20 07:53:24 crc kubenswrapper[5136]: I0320 07:53:24.221361 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerStarted","Data":"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b"}
Mar 20 07:53:24 crc kubenswrapper[5136]: I0320 07:53:24.246469 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dmmv5" podStartSLOduration=2.673528357 podStartE2EDuration="8.246451667s" podCreationTimestamp="2026-03-20 07:53:16 +0000 UTC" firstStartedPulling="2026-03-20 07:53:18.176676216 +0000 UTC m=+3830.435987367" lastFinishedPulling="2026-03-20 07:53:23.749599536 +0000 UTC m=+3836.008910677" observedRunningTime="2026-03-20 07:53:24.24527915 +0000 UTC m=+3836.504590301" watchObservedRunningTime="2026-03-20 07:53:24.246451667 +0000 UTC m=+3836.505762818"
Mar 20 07:53:26 crc kubenswrapper[5136]: I0320 07:53:26.963144 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dmmv5"
Mar 20 07:53:26 crc kubenswrapper[5136]: I0320 07:53:26.963442 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dmmv5"
Mar 20 07:53:27 crc kubenswrapper[5136]: I0320 07:53:27.006437 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dmmv5"
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.743511 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"]
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.745442 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.764263 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"]
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.838379 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fnf5\" (UniqueName: \"kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.838439 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.838461 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.939576 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fnf5\" (UniqueName: \"kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.940021 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.940068 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.940752 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.940805 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:36 crc kubenswrapper[5136]: I0320 07:53:36.964157 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fnf5\" (UniqueName: \"kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5\") pod \"redhat-marketplace-nnnhv\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") " pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:37 crc kubenswrapper[5136]: I0320 07:53:37.023559 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dmmv5"
Mar 20 07:53:37 crc kubenswrapper[5136]: I0320 07:53:37.078560 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:37 crc kubenswrapper[5136]: I0320 07:53:37.503668 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"]
Mar 20 07:53:38 crc kubenswrapper[5136]: I0320 07:53:38.313869 5136 generic.go:334] "Generic (PLEG): container finished" podID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerID="d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c" exitCode=0
Mar 20 07:53:38 crc kubenswrapper[5136]: I0320 07:53:38.313973 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerDied","Data":"d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c"}
Mar 20 07:53:38 crc kubenswrapper[5136]: I0320 07:53:38.314162 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerStarted","Data":"b2ec866fc23c1118a0f35606e64cb40fc45b5f6c1417cf766a02b469d9ead578"}
Mar 20 07:53:39 crc kubenswrapper[5136]: I0320 07:53:39.321699 5136 generic.go:334] "Generic (PLEG): container finished" podID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerID="6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe" exitCode=0
Mar 20 07:53:39 crc kubenswrapper[5136]: I0320 07:53:39.321735 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerDied","Data":"6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe"}
Mar 20 07:53:41 crc kubenswrapper[5136]: I0320 07:53:41.340287 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerStarted","Data":"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea"}
Mar 20 07:53:41 crc kubenswrapper[5136]: I0320 07:53:41.357450 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nnnhv" podStartSLOduration=3.9463824709999997 podStartE2EDuration="5.357434608s" podCreationTimestamp="2026-03-20 07:53:36 +0000 UTC" firstStartedPulling="2026-03-20 07:53:38.315562773 +0000 UTC m=+3850.574873924" lastFinishedPulling="2026-03-20 07:53:39.72661491 +0000 UTC m=+3851.985926061" observedRunningTime="2026-03-20 07:53:41.356419397 +0000 UTC m=+3853.615730558" watchObservedRunningTime="2026-03-20 07:53:41.357434608 +0000 UTC m=+3853.616745749"
Mar 20 07:53:41 crc kubenswrapper[5136]: I0320 07:53:41.962133 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"]
Mar 20 07:53:42 crc kubenswrapper[5136]: I0320 07:53:42.740451 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qfgkr"]
Mar 20 07:53:42 crc kubenswrapper[5136]: I0320 07:53:42.740816 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qfgkr" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="registry-server" containerID="cri-o://613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871" gracePeriod=2
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.173899 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.229638 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6x6v\" (UniqueName: \"kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v\") pod \"e1d2d341-1694-4f55-860a-46b11bac80c8\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") "
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.229715 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content\") pod \"e1d2d341-1694-4f55-860a-46b11bac80c8\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") "
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.229790 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities\") pod \"e1d2d341-1694-4f55-860a-46b11bac80c8\" (UID: \"e1d2d341-1694-4f55-860a-46b11bac80c8\") "
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.230469 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities" (OuterVolumeSpecName: "utilities") pod "e1d2d341-1694-4f55-860a-46b11bac80c8" (UID: "e1d2d341-1694-4f55-860a-46b11bac80c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.245994 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v" (OuterVolumeSpecName: "kube-api-access-r6x6v") pod "e1d2d341-1694-4f55-860a-46b11bac80c8" (UID: "e1d2d341-1694-4f55-860a-46b11bac80c8"). InnerVolumeSpecName "kube-api-access-r6x6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.317479 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e1d2d341-1694-4f55-860a-46b11bac80c8" (UID: "e1d2d341-1694-4f55-860a-46b11bac80c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.331202 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.331239 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6x6v\" (UniqueName: \"kubernetes.io/projected/e1d2d341-1694-4f55-860a-46b11bac80c8-kube-api-access-r6x6v\") on node \"crc\" DevicePath \"\""
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.331255 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1d2d341-1694-4f55-860a-46b11bac80c8-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.355526 5136 generic.go:334] "Generic (PLEG): container finished" podID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerID="613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871" exitCode=0
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.355564 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerDied","Data":"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871"}
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.355587 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qfgkr" event={"ID":"e1d2d341-1694-4f55-860a-46b11bac80c8","Type":"ContainerDied","Data":"a5974c459c2386be53440ce7c343a53cc05081c5b95afdcf1296d6de5d8a6e98"}
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.355603 5136 scope.go:117] "RemoveContainer" containerID="613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871"
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.355702 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qfgkr"
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.390466 5136 scope.go:117] "RemoveContainer" containerID="6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697"
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.404859 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qfgkr"]
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.411210 5136 scope.go:117] "RemoveContainer" containerID="6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c"
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.412320 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qfgkr"]
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.433503 5136 scope.go:117] "RemoveContainer" containerID="613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871"
Mar 20 07:53:43 crc kubenswrapper[5136]: E0320 07:53:43.435065 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871\": container with ID starting with 613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871 not found: ID does not exist" containerID="613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871"
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.435105 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871"} err="failed to get container status \"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871\": rpc error: code = NotFound desc = could not find container \"613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871\": container with ID starting with 613d087bf39841470cd03f88985bdf919817769301d013c32de892e6d64fc871 not found: ID does not exist"
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.435136 5136 scope.go:117] "RemoveContainer" containerID="6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697"
Mar 20 07:53:43 crc kubenswrapper[5136]: E0320 07:53:43.436303 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697\": container with ID starting with 6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697 not found: ID does not exist" containerID="6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697"
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.436344 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697"} err="failed to get container status \"6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697\": rpc error: code = NotFound desc = could not find container \"6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697\": container with ID starting with 6aa6be1edd96adead610b068e164a4ecd1bf0b9561c441981a9d8ad159f57697 not found: ID does not exist"
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.436370 5136 scope.go:117] "RemoveContainer" containerID="6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c"
Mar 20 07:53:43 crc kubenswrapper[5136]: E0320 07:53:43.436673 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c\": container with ID starting with 6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c not found: ID does not exist" containerID="6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c"
Mar 20 07:53:43 crc kubenswrapper[5136]: I0320 07:53:43.436694 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c"} err="failed to get container status \"6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c\": rpc error: code = NotFound desc = could not find container \"6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c\": container with ID starting with 6c16257981aad8a2f2ba3ee029498da7aa41927021b136d319282cd85c6a814c not found: ID does not exist"
Mar 20 07:53:44 crc kubenswrapper[5136]: I0320 07:53:44.404357 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" path="/var/lib/kubelet/pods/e1d2d341-1694-4f55-860a-46b11bac80c8/volumes"
Mar 20 07:53:47 crc kubenswrapper[5136]: I0320 07:53:47.078738 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:47 crc kubenswrapper[5136]: I0320 07:53:47.079264 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:47 crc kubenswrapper[5136]: I0320 07:53:47.139143 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:47 crc kubenswrapper[5136]: I0320 07:53:47.452718 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:49 crc kubenswrapper[5136]: I0320 07:53:49.391707 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"]
Mar 20 07:53:49 crc kubenswrapper[5136]: I0320 07:53:49.407439 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nnnhv" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="registry-server" containerID="cri-o://5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea" gracePeriod=2
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.337666 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.396750 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content\") pod \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") "
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.396960 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities\") pod \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") "
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.397006 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fnf5\" (UniqueName: \"kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5\") pod \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\" (UID: \"89142574-8ae9-43b8-b0d1-9d6f6ede9e56\") "
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.398174 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities" (OuterVolumeSpecName: "utilities") pod "89142574-8ae9-43b8-b0d1-9d6f6ede9e56" (UID: "89142574-8ae9-43b8-b0d1-9d6f6ede9e56"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.405631 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5" (OuterVolumeSpecName: "kube-api-access-7fnf5") pod "89142574-8ae9-43b8-b0d1-9d6f6ede9e56" (UID: "89142574-8ae9-43b8-b0d1-9d6f6ede9e56"). InnerVolumeSpecName "kube-api-access-7fnf5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.414382 5136 generic.go:334] "Generic (PLEG): container finished" podID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerID="5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea" exitCode=0
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.414474 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nnnhv"
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.436509 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerDied","Data":"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea"}
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.436581 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nnnhv" event={"ID":"89142574-8ae9-43b8-b0d1-9d6f6ede9e56","Type":"ContainerDied","Data":"b2ec866fc23c1118a0f35606e64cb40fc45b5f6c1417cf766a02b469d9ead578"}
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.436625 5136 scope.go:117] "RemoveContainer" containerID="5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea"
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.451426 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89142574-8ae9-43b8-b0d1-9d6f6ede9e56" (UID: "89142574-8ae9-43b8-b0d1-9d6f6ede9e56"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.459909 5136 scope.go:117] "RemoveContainer" containerID="6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe"
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.481242 5136 scope.go:117] "RemoveContainer" containerID="d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c"
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.498943 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.498994 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.499014 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fnf5\" (UniqueName: \"kubernetes.io/projected/89142574-8ae9-43b8-b0d1-9d6f6ede9e56-kube-api-access-7fnf5\") on node \"crc\" DevicePath \"\""
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.509763 5136 scope.go:117] "RemoveContainer" containerID="5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea"
Mar 20 07:53:50 crc kubenswrapper[5136]: E0320 07:53:50.510681 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea\": container with ID starting with 5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea not found: ID does not exist" containerID="5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea"
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.510753 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea"} err="failed to get container status \"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea\": rpc error: code = NotFound desc = could not find container \"5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea\": container with ID starting with 5117a943701c0e9fd8b0866ceafbc1da72b8292a356c891c1a0051c9e5df9cea not found: ID does not exist"
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.510799 5136 scope.go:117] "RemoveContainer" containerID="6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe"
Mar 20 07:53:50 crc kubenswrapper[5136]: E0320 07:53:50.511351 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe\": container with ID starting with 6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe not found: ID does not exist" containerID="6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe"
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.511396 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe"} err="failed to get container status \"6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe\": rpc error: code = NotFound desc = could not find container \"6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe\": container with ID starting with 6c9682d7ed82e39830b078477fbfae2964caa0c11c3dcecb2022b793f4cec1fe not found: ID does not exist"
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.511426 5136 scope.go:117] "RemoveContainer" containerID="d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c"
Mar 20 07:53:50 crc kubenswrapper[5136]: E0320 07:53:50.511719 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c\": container with ID starting with d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c not found: ID does not exist" containerID="d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c"
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.511755 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c"} err="failed to get container status \"d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c\": rpc error: code = NotFound desc = could not find container \"d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c\": container with ID starting with d0441dcdca78dbcbcda92c889ca601e6e295851fa3ba79cb28e7d66595c0942c not found: ID does not exist"
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.750713 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"]
Mar 20 07:53:50 crc kubenswrapper[5136]: I0320 07:53:50.756035 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nnnhv"]
Mar 20 07:53:52 crc kubenswrapper[5136]: I0320 07:53:52.415995 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" path="/var/lib/kubelet/pods/89142574-8ae9-43b8-b0d1-9d6f6ede9e56/volumes"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.156681 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566554-rhdhc"]
Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157642 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="registry-server"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157659 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="registry-server"
Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157676 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="extract-content"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157686 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="extract-content"
Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157701 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="extract-utilities"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157711 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="extract-utilities"
Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157724 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="registry-server"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157733 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="registry-server"
Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157749 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="extract-utilities"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157756 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="extract-utilities"
Mar 20 07:54:00 crc kubenswrapper[5136]: E0320 07:54:00.157777 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="extract-content"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157785 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="extract-content"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157964 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1d2d341-1694-4f55-860a-46b11bac80c8" containerName="registry-server"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.157988 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="89142574-8ae9-43b8-b0d1-9d6f6ede9e56" containerName="registry-server"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.158492 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566554-rhdhc"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.161456 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.162294 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.162507 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.165341 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566554-rhdhc"]
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.238914 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhj6j\" (UniqueName: \"kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j\") pod \"auto-csr-approver-29566554-rhdhc\" (UID: \"b14c729c-040e-40a8-90bd-6310cf18d489\") " pod="openshift-infra/auto-csr-approver-29566554-rhdhc"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.340867 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhj6j\" (UniqueName: \"kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j\") pod \"auto-csr-approver-29566554-rhdhc\" (UID: \"b14c729c-040e-40a8-90bd-6310cf18d489\") " pod="openshift-infra/auto-csr-approver-29566554-rhdhc"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.372471 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhj6j\" (UniqueName: \"kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j\") pod \"auto-csr-approver-29566554-rhdhc\" (UID: \"b14c729c-040e-40a8-90bd-6310cf18d489\") " pod="openshift-infra/auto-csr-approver-29566554-rhdhc"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.480297 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566554-rhdhc"
Mar 20 07:54:00 crc kubenswrapper[5136]: I0320 07:54:00.978856 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566554-rhdhc"]
Mar 20 07:54:01 crc kubenswrapper[5136]: I0320 07:54:01.515775 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" event={"ID":"b14c729c-040e-40a8-90bd-6310cf18d489","Type":"ContainerStarted","Data":"d73ed75b4c4128e790ebd50d7f1ab0977dae93e0ef82580373c14c2d1ca8c84f"}
Mar 20 07:54:02 crc kubenswrapper[5136]: I0320 07:54:02.526400 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" event={"ID":"b14c729c-040e-40a8-90bd-6310cf18d489","Type":"ContainerStarted","Data":"2bdd42fe67ed84f38314c250e3cb1b39e3c5ab87ea5a8e75695c9bc28f0ae60d"}
Mar 20 07:54:02 crc kubenswrapper[5136]: I0320 07:54:02.541227 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" podStartSLOduration=1.37205477 podStartE2EDuration="2.541208693s" podCreationTimestamp="2026-03-20 07:54:00
+0000 UTC" firstStartedPulling="2026-03-20 07:54:00.976321669 +0000 UTC m=+3873.235632840" lastFinishedPulling="2026-03-20 07:54:02.145475572 +0000 UTC m=+3874.404786763" observedRunningTime="2026-03-20 07:54:02.536499958 +0000 UTC m=+3874.795811129" watchObservedRunningTime="2026-03-20 07:54:02.541208693 +0000 UTC m=+3874.800519844" Mar 20 07:54:03 crc kubenswrapper[5136]: I0320 07:54:03.536763 5136 generic.go:334] "Generic (PLEG): container finished" podID="b14c729c-040e-40a8-90bd-6310cf18d489" containerID="2bdd42fe67ed84f38314c250e3cb1b39e3c5ab87ea5a8e75695c9bc28f0ae60d" exitCode=0 Mar 20 07:54:03 crc kubenswrapper[5136]: I0320 07:54:03.536844 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" event={"ID":"b14c729c-040e-40a8-90bd-6310cf18d489","Type":"ContainerDied","Data":"2bdd42fe67ed84f38314c250e3cb1b39e3c5ab87ea5a8e75695c9bc28f0ae60d"} Mar 20 07:54:04 crc kubenswrapper[5136]: I0320 07:54:04.863737 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.003619 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhj6j\" (UniqueName: \"kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j\") pod \"b14c729c-040e-40a8-90bd-6310cf18d489\" (UID: \"b14c729c-040e-40a8-90bd-6310cf18d489\") " Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.010273 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j" (OuterVolumeSpecName: "kube-api-access-xhj6j") pod "b14c729c-040e-40a8-90bd-6310cf18d489" (UID: "b14c729c-040e-40a8-90bd-6310cf18d489"). InnerVolumeSpecName "kube-api-access-xhj6j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.105571 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhj6j\" (UniqueName: \"kubernetes.io/projected/b14c729c-040e-40a8-90bd-6310cf18d489-kube-api-access-xhj6j\") on node \"crc\" DevicePath \"\"" Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.555979 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" event={"ID":"b14c729c-040e-40a8-90bd-6310cf18d489","Type":"ContainerDied","Data":"d73ed75b4c4128e790ebd50d7f1ab0977dae93e0ef82580373c14c2d1ca8c84f"} Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.556021 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d73ed75b4c4128e790ebd50d7f1ab0977dae93e0ef82580373c14c2d1ca8c84f" Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.556082 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566554-rhdhc" Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.635904 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566548-bbghf"] Mar 20 07:54:05 crc kubenswrapper[5136]: I0320 07:54:05.646453 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566548-bbghf"] Mar 20 07:54:06 crc kubenswrapper[5136]: I0320 07:54:06.413966 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eae2b10f-99a8-4ada-a8fb-d674d6e2dc66" path="/var/lib/kubelet/pods/eae2b10f-99a8-4ada-a8fb-d674d6e2dc66/volumes" Mar 20 07:54:15 crc kubenswrapper[5136]: I0320 07:54:15.821673 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 20 07:54:15 crc kubenswrapper[5136]: I0320 07:54:15.822319 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:54:35 crc kubenswrapper[5136]: I0320 07:54:35.082553 5136 scope.go:117] "RemoveContainer" containerID="eda4db7731b82a54ef6f8997e413d44c2ceb0549c49bbb5b7671591ccebd691e" Mar 20 07:54:45 crc kubenswrapper[5136]: I0320 07:54:45.821949 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:54:45 crc kubenswrapper[5136]: I0320 07:54:45.822456 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:55:15 crc kubenswrapper[5136]: I0320 07:55:15.821569 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 07:55:15 crc kubenswrapper[5136]: I0320 07:55:15.822876 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 07:55:15 crc kubenswrapper[5136]: I0320 07:55:15.822989 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 07:55:15 crc kubenswrapper[5136]: I0320 07:55:15.824084 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 07:55:15 crc kubenswrapper[5136]: I0320 07:55:15.824580 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" gracePeriod=600 Mar 20 07:55:15 crc kubenswrapper[5136]: E0320 07:55:15.959341 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:55:16 crc kubenswrapper[5136]: I0320 07:55:16.445210 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" exitCode=0 Mar 20 07:55:16 crc kubenswrapper[5136]: I0320 07:55:16.445274 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e"} Mar 20 07:55:16 crc kubenswrapper[5136]: I0320 07:55:16.445326 5136 scope.go:117] "RemoveContainer" containerID="722fd4b0219dbc1893de5ddfff6bae8f2f9fc01b73ea47300dc2db9cdf75bb55" Mar 20 07:55:16 crc kubenswrapper[5136]: I0320 07:55:16.446096 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:55:16 crc kubenswrapper[5136]: E0320 07:55:16.446366 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:55:29 crc kubenswrapper[5136]: I0320 07:55:29.398043 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:55:29 crc kubenswrapper[5136]: E0320 07:55:29.399178 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:55:42 crc kubenswrapper[5136]: I0320 07:55:42.396778 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:55:42 crc kubenswrapper[5136]: E0320 
07:55:42.397528 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:55:54 crc kubenswrapper[5136]: I0320 07:55:54.396602 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:55:54 crc kubenswrapper[5136]: E0320 07:55:54.397452 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.153363 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566556-9vb6m"] Mar 20 07:56:00 crc kubenswrapper[5136]: E0320 07:56:00.154127 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b14c729c-040e-40a8-90bd-6310cf18d489" containerName="oc" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.154147 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b14c729c-040e-40a8-90bd-6310cf18d489" containerName="oc" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.154401 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b14c729c-040e-40a8-90bd-6310cf18d489" containerName="oc" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.155040 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.166579 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566556-9vb6m"] Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.201734 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.201810 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.201759 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.202567 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n2ng\" (UniqueName: \"kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng\") pod \"auto-csr-approver-29566556-9vb6m\" (UID: \"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9\") " pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.303795 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n2ng\" (UniqueName: \"kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng\") pod \"auto-csr-approver-29566556-9vb6m\" (UID: \"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9\") " pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.329628 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n2ng\" (UniqueName: \"kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng\") pod \"auto-csr-approver-29566556-9vb6m\" (UID: \"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9\") " 
pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.523734 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.951329 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566556-9vb6m"] Mar 20 07:56:00 crc kubenswrapper[5136]: I0320 07:56:00.953532 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 07:56:01 crc kubenswrapper[5136]: I0320 07:56:01.844860 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" event={"ID":"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9","Type":"ContainerStarted","Data":"e0e6e27ddb12271dd3a1159302afff5c7ca7ef161d27a66de3658607b795fcc0"} Mar 20 07:56:02 crc kubenswrapper[5136]: I0320 07:56:02.855041 5136 generic.go:334] "Generic (PLEG): container finished" podID="a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" containerID="63ba059f75c6d4d3450d3ac5b012caecfb450fe5911e60bdfcbba855ebc6ef49" exitCode=0 Mar 20 07:56:02 crc kubenswrapper[5136]: I0320 07:56:02.855098 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" event={"ID":"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9","Type":"ContainerDied","Data":"63ba059f75c6d4d3450d3ac5b012caecfb450fe5911e60bdfcbba855ebc6ef49"} Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.253121 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.358233 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n2ng\" (UniqueName: \"kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng\") pod \"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9\" (UID: \"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9\") " Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.365770 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng" (OuterVolumeSpecName: "kube-api-access-4n2ng") pod "a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" (UID: "a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9"). InnerVolumeSpecName "kube-api-access-4n2ng". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.459726 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n2ng\" (UniqueName: \"kubernetes.io/projected/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9-kube-api-access-4n2ng\") on node \"crc\" DevicePath \"\"" Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.879486 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" event={"ID":"a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9","Type":"ContainerDied","Data":"e0e6e27ddb12271dd3a1159302afff5c7ca7ef161d27a66de3658607b795fcc0"} Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.879533 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0e6e27ddb12271dd3a1159302afff5c7ca7ef161d27a66de3658607b795fcc0" Mar 20 07:56:04 crc kubenswrapper[5136]: I0320 07:56:04.879564 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566556-9vb6m" Mar 20 07:56:05 crc kubenswrapper[5136]: I0320 07:56:05.320183 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566550-nxn8z"] Mar 20 07:56:05 crc kubenswrapper[5136]: I0320 07:56:05.328797 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566550-nxn8z"] Mar 20 07:56:06 crc kubenswrapper[5136]: I0320 07:56:06.407003 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e892786-304f-4449-8303-227a30b2af0c" path="/var/lib/kubelet/pods/9e892786-304f-4449-8303-227a30b2af0c/volumes" Mar 20 07:56:07 crc kubenswrapper[5136]: I0320 07:56:07.397412 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:56:07 crc kubenswrapper[5136]: E0320 07:56:07.397853 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:56:21 crc kubenswrapper[5136]: I0320 07:56:21.397102 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:56:21 crc kubenswrapper[5136]: E0320 07:56:21.397881 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:56:34 crc kubenswrapper[5136]: I0320 07:56:34.396681 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:56:34 crc kubenswrapper[5136]: E0320 07:56:34.399727 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:56:35 crc kubenswrapper[5136]: I0320 07:56:35.193481 5136 scope.go:117] "RemoveContainer" containerID="fbf51c0f85e48cadf70be318d12d8502f4a21ef24eddd694f9b07eebf9064ae5" Mar 20 07:56:48 crc kubenswrapper[5136]: I0320 07:56:48.401443 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:56:48 crc kubenswrapper[5136]: E0320 07:56:48.402385 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:56:59 crc kubenswrapper[5136]: I0320 07:56:59.396769 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:56:59 crc kubenswrapper[5136]: E0320 07:56:59.397390 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.163150 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:01 crc kubenswrapper[5136]: E0320 07:57:01.163641 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" containerName="oc" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.163662 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" containerName="oc" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.163913 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" containerName="oc" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.165464 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.180679 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.258829 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngdh9\" (UniqueName: \"kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.258880 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.258916 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.359942 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngdh9\" (UniqueName: \"kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.360381 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.360568 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.360929 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.360985 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.383566 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngdh9\" (UniqueName: \"kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9\") pod \"redhat-operators-k9mh2\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.486139 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:01 crc kubenswrapper[5136]: I0320 07:57:01.913231 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:02 crc kubenswrapper[5136]: I0320 07:57:02.343901 5136 generic.go:334] "Generic (PLEG): container finished" podID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerID="255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb" exitCode=0 Mar 20 07:57:02 crc kubenswrapper[5136]: I0320 07:57:02.343970 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerDied","Data":"255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb"} Mar 20 07:57:02 crc kubenswrapper[5136]: I0320 07:57:02.344015 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerStarted","Data":"babf061ccf17144d55f5b246b2717171d77d7c7349e6bdae8fdd7bb7671665b9"} Mar 20 07:57:03 crc kubenswrapper[5136]: I0320 07:57:03.361653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerStarted","Data":"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe"} Mar 20 07:57:04 crc kubenswrapper[5136]: I0320 07:57:04.369673 5136 generic.go:334] "Generic (PLEG): container finished" podID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerID="2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe" exitCode=0 Mar 20 07:57:04 crc kubenswrapper[5136]: I0320 07:57:04.369725 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" 
event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerDied","Data":"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe"} Mar 20 07:57:05 crc kubenswrapper[5136]: I0320 07:57:05.378239 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerStarted","Data":"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3"} Mar 20 07:57:05 crc kubenswrapper[5136]: I0320 07:57:05.397672 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k9mh2" podStartSLOduration=1.893868595 podStartE2EDuration="4.397653119s" podCreationTimestamp="2026-03-20 07:57:01 +0000 UTC" firstStartedPulling="2026-03-20 07:57:02.345575428 +0000 UTC m=+4054.604886579" lastFinishedPulling="2026-03-20 07:57:04.849359952 +0000 UTC m=+4057.108671103" observedRunningTime="2026-03-20 07:57:05.396184634 +0000 UTC m=+4057.655495795" watchObservedRunningTime="2026-03-20 07:57:05.397653119 +0000 UTC m=+4057.656964270" Mar 20 07:57:11 crc kubenswrapper[5136]: I0320 07:57:11.486575 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:11 crc kubenswrapper[5136]: I0320 07:57:11.487073 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:12 crc kubenswrapper[5136]: I0320 07:57:12.396738 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:57:12 crc kubenswrapper[5136]: E0320 07:57:12.397195 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:57:12 crc kubenswrapper[5136]: I0320 07:57:12.540664 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k9mh2" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="registry-server" probeResult="failure" output=< Mar 20 07:57:12 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 07:57:12 crc kubenswrapper[5136]: > Mar 20 07:57:21 crc kubenswrapper[5136]: I0320 07:57:21.544380 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:21 crc kubenswrapper[5136]: I0320 07:57:21.636806 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:21 crc kubenswrapper[5136]: I0320 07:57:21.795117 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:23 crc kubenswrapper[5136]: I0320 07:57:23.397184 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:57:23 crc kubenswrapper[5136]: E0320 07:57:23.398643 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:57:23 crc kubenswrapper[5136]: I0320 07:57:23.557040 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-operators-k9mh2" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="registry-server" containerID="cri-o://a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3" gracePeriod=2 Mar 20 07:57:23 crc kubenswrapper[5136]: I0320 07:57:23.960223 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.100521 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities\") pod \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.101029 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content\") pod \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.101097 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngdh9\" (UniqueName: \"kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9\") pod \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\" (UID: \"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec\") " Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.101580 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities" (OuterVolumeSpecName: "utilities") pod "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" (UID: "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.106299 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9" (OuterVolumeSpecName: "kube-api-access-ngdh9") pod "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" (UID: "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec"). InnerVolumeSpecName "kube-api-access-ngdh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.202508 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.202538 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngdh9\" (UniqueName: \"kubernetes.io/projected/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-kube-api-access-ngdh9\") on node \"crc\" DevicePath \"\"" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.252564 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" (UID: "baeeebe9-6018-4b91-8b5d-a94e3b6e8cec"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.303772 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.563632 5136 generic.go:334] "Generic (PLEG): container finished" podID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerID="a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3" exitCode=0 Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.563668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerDied","Data":"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3"} Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.563684 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k9mh2" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.563701 5136 scope.go:117] "RemoveContainer" containerID="a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.563691 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k9mh2" event={"ID":"baeeebe9-6018-4b91-8b5d-a94e3b6e8cec","Type":"ContainerDied","Data":"babf061ccf17144d55f5b246b2717171d77d7c7349e6bdae8fdd7bb7671665b9"} Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.588244 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.594502 5136 scope.go:117] "RemoveContainer" containerID="2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.595331 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k9mh2"] Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.628409 5136 scope.go:117] "RemoveContainer" containerID="255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.645870 5136 scope.go:117] "RemoveContainer" containerID="a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3" Mar 20 07:57:24 crc kubenswrapper[5136]: E0320 07:57:24.646325 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3\": container with ID starting with a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3 not found: ID does not exist" containerID="a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.646358 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3"} err="failed to get container status \"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3\": rpc error: code = NotFound desc = could not find container \"a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3\": container with ID starting with a100a445c5930c668abfe386184b0582de007629076620c9c6a676afe9aeb3d3 not found: ID does not exist" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.646386 5136 scope.go:117] "RemoveContainer" containerID="2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe" Mar 20 07:57:24 crc kubenswrapper[5136]: E0320 07:57:24.646645 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe\": container with ID starting with 2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe not found: ID does not exist" containerID="2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.646671 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe"} err="failed to get container status \"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe\": rpc error: code = NotFound desc = could not find container \"2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe\": container with ID starting with 2c0ae4b61a8bf59c6aba839058cc8ca90c0e5cf521186d012fe522ac3ae7b9fe not found: ID does not exist" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.646689 5136 scope.go:117] "RemoveContainer" containerID="255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb" Mar 20 07:57:24 crc kubenswrapper[5136]: E0320 
07:57:24.646923 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb\": container with ID starting with 255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb not found: ID does not exist" containerID="255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb" Mar 20 07:57:24 crc kubenswrapper[5136]: I0320 07:57:24.646950 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb"} err="failed to get container status \"255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb\": rpc error: code = NotFound desc = could not find container \"255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb\": container with ID starting with 255db5f84bafe47d60d92f1f89264e14e0bff8d4d2fe29b9b5570626608637bb not found: ID does not exist" Mar 20 07:57:26 crc kubenswrapper[5136]: I0320 07:57:26.405347 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" path="/var/lib/kubelet/pods/baeeebe9-6018-4b91-8b5d-a94e3b6e8cec/volumes" Mar 20 07:57:34 crc kubenswrapper[5136]: I0320 07:57:34.397572 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:57:34 crc kubenswrapper[5136]: E0320 07:57:34.398466 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:57:49 crc kubenswrapper[5136]: I0320 07:57:49.396898 
5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:57:49 crc kubenswrapper[5136]: E0320 07:57:49.397564 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.161178 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566558-28xg4"] Mar 20 07:58:00 crc kubenswrapper[5136]: E0320 07:58:00.162469 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="extract-utilities" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.162490 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="extract-utilities" Mar 20 07:58:00 crc kubenswrapper[5136]: E0320 07:58:00.162514 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="extract-content" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.162523 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="extract-content" Mar 20 07:58:00 crc kubenswrapper[5136]: E0320 07:58:00.162557 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="registry-server" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.162565 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="registry-server" Mar 20 07:58:00 crc 
kubenswrapper[5136]: I0320 07:58:00.162763 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="baeeebe9-6018-4b91-8b5d-a94e3b6e8cec" containerName="registry-server" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.163496 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.167475 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.168079 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.168142 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.174469 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566558-28xg4"] Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.278462 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5466q\" (UniqueName: \"kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q\") pod \"auto-csr-approver-29566558-28xg4\" (UID: \"86a36a1a-3cb0-4827-94dc-d0f12aaf385f\") " pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.379601 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5466q\" (UniqueName: \"kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q\") pod \"auto-csr-approver-29566558-28xg4\" (UID: \"86a36a1a-3cb0-4827-94dc-d0f12aaf385f\") " pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.410925 
5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5466q\" (UniqueName: \"kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q\") pod \"auto-csr-approver-29566558-28xg4\" (UID: \"86a36a1a-3cb0-4827-94dc-d0f12aaf385f\") " pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.499668 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.727863 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566558-28xg4"] Mar 20 07:58:00 crc kubenswrapper[5136]: I0320 07:58:00.831889 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566558-28xg4" event={"ID":"86a36a1a-3cb0-4827-94dc-d0f12aaf385f","Type":"ContainerStarted","Data":"efd0c500f15a2b8eca0de3b8c0c4864bdbb59bca070f0d8489e35dbb0b9291fa"} Mar 20 07:58:02 crc kubenswrapper[5136]: I0320 07:58:02.396773 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:58:02 crc kubenswrapper[5136]: E0320 07:58:02.397178 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:58:02 crc kubenswrapper[5136]: I0320 07:58:02.848551 5136 generic.go:334] "Generic (PLEG): container finished" podID="86a36a1a-3cb0-4827-94dc-d0f12aaf385f" containerID="10b2eeb474ede75a881a1e488088999be16ed59aa15136e1ec1d51ce1d945aec" exitCode=0 Mar 20 07:58:02 crc 
kubenswrapper[5136]: I0320 07:58:02.848668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566558-28xg4" event={"ID":"86a36a1a-3cb0-4827-94dc-d0f12aaf385f","Type":"ContainerDied","Data":"10b2eeb474ede75a881a1e488088999be16ed59aa15136e1ec1d51ce1d945aec"} Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.261442 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.445242 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5466q\" (UniqueName: \"kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q\") pod \"86a36a1a-3cb0-4827-94dc-d0f12aaf385f\" (UID: \"86a36a1a-3cb0-4827-94dc-d0f12aaf385f\") " Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.452151 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q" (OuterVolumeSpecName: "kube-api-access-5466q") pod "86a36a1a-3cb0-4827-94dc-d0f12aaf385f" (UID: "86a36a1a-3cb0-4827-94dc-d0f12aaf385f"). InnerVolumeSpecName "kube-api-access-5466q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.546996 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5466q\" (UniqueName: \"kubernetes.io/projected/86a36a1a-3cb0-4827-94dc-d0f12aaf385f-kube-api-access-5466q\") on node \"crc\" DevicePath \"\"" Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.867404 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566558-28xg4" event={"ID":"86a36a1a-3cb0-4827-94dc-d0f12aaf385f","Type":"ContainerDied","Data":"efd0c500f15a2b8eca0de3b8c0c4864bdbb59bca070f0d8489e35dbb0b9291fa"} Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.867463 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="efd0c500f15a2b8eca0de3b8c0c4864bdbb59bca070f0d8489e35dbb0b9291fa" Mar 20 07:58:04 crc kubenswrapper[5136]: I0320 07:58:04.867470 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566558-28xg4" Mar 20 07:58:05 crc kubenswrapper[5136]: I0320 07:58:05.338182 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566552-45w99"] Mar 20 07:58:05 crc kubenswrapper[5136]: I0320 07:58:05.343050 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566552-45w99"] Mar 20 07:58:06 crc kubenswrapper[5136]: I0320 07:58:06.410445 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="680d027e-ec7b-41fa-928c-826f0968c6f2" path="/var/lib/kubelet/pods/680d027e-ec7b-41fa-928c-826f0968c6f2/volumes" Mar 20 07:58:14 crc kubenswrapper[5136]: I0320 07:58:14.397072 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:58:14 crc kubenswrapper[5136]: E0320 07:58:14.398039 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:58:29 crc kubenswrapper[5136]: I0320 07:58:29.397108 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:58:29 crc kubenswrapper[5136]: E0320 07:58:29.397922 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:58:35 crc kubenswrapper[5136]: I0320 07:58:35.302273 5136 scope.go:117] "RemoveContainer" containerID="850f029af670145399cc93675607b8410dbf5d367cbba9e2397a2a62aff8327a" Mar 20 07:58:43 crc kubenswrapper[5136]: I0320 07:58:43.396387 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:58:43 crc kubenswrapper[5136]: E0320 07:58:43.397295 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:58:57 crc kubenswrapper[5136]: I0320 07:58:57.397518 5136 scope.go:117] "RemoveContainer" 
containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:58:57 crc kubenswrapper[5136]: E0320 07:58:57.398638 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:59:08 crc kubenswrapper[5136]: I0320 07:59:08.396930 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:59:08 crc kubenswrapper[5136]: E0320 07:59:08.398051 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:59:22 crc kubenswrapper[5136]: I0320 07:59:22.396963 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:59:22 crc kubenswrapper[5136]: E0320 07:59:22.398061 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:59:36 crc kubenswrapper[5136]: I0320 07:59:36.396308 5136 scope.go:117] 
"RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:59:36 crc kubenswrapper[5136]: E0320 07:59:36.397007 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 07:59:51 crc kubenswrapper[5136]: I0320 07:59:51.397090 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 07:59:51 crc kubenswrapper[5136]: E0320 07:59:51.398095 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.146766 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566560-p6kgg"] Mar 20 08:00:00 crc kubenswrapper[5136]: E0320 08:00:00.148939 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a36a1a-3cb0-4827-94dc-d0f12aaf385f" containerName="oc" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.149073 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a36a1a-3cb0-4827-94dc-d0f12aaf385f" containerName="oc" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.149340 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a36a1a-3cb0-4827-94dc-d0f12aaf385f" containerName="oc" Mar 
20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.150040 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566560-p6kgg" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.152051 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.152451 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.153481 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.173491 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"] Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.174614 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.176190 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.176392 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.184556 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566560-p6kgg"]
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.188495 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"]
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.253937 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp4fs\" (UniqueName: \"kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs\") pod \"auto-csr-approver-29566560-p6kgg\" (UID: \"dafdbb11-e22c-4545-8678-7757ef7e8605\") " pod="openshift-infra/auto-csr-approver-29566560-p6kgg"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.355662 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwsk9\" (UniqueName: \"kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.355731 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp4fs\" (UniqueName: \"kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs\") pod \"auto-csr-approver-29566560-p6kgg\" (UID: \"dafdbb11-e22c-4545-8678-7757ef7e8605\") " pod="openshift-infra/auto-csr-approver-29566560-p6kgg"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.355951 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.356051 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.377676 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp4fs\" (UniqueName: \"kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs\") pod \"auto-csr-approver-29566560-p6kgg\" (UID: \"dafdbb11-e22c-4545-8678-7757ef7e8605\") " pod="openshift-infra/auto-csr-approver-29566560-p6kgg"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.457462 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.457531 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.457588 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwsk9\" (UniqueName: \"kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.458803 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.462364 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.474040 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwsk9\" (UniqueName: \"kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9\") pod \"collect-profiles-29566560-8gmtl\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.514045 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566560-p6kgg"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.522023 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.918128 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"]
Mar 20 08:00:00 crc kubenswrapper[5136]: I0320 08:00:00.972031 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566560-p6kgg"]
Mar 20 08:00:01 crc kubenswrapper[5136]: I0320 08:00:01.835074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566560-p6kgg" event={"ID":"dafdbb11-e22c-4545-8678-7757ef7e8605","Type":"ContainerStarted","Data":"835dcd63591d6d0b44066723750deb0b5ca26b1e8f791f70756132ab60105cdb"}
Mar 20 08:00:01 crc kubenswrapper[5136]: I0320 08:00:01.837398 5136 generic.go:334] "Generic (PLEG): container finished" podID="90ad33e9-cb6b-450c-9703-8d6e379f3075" containerID="a9399ede282cd1d4b161abddeaa1193070be8003a67d2c8907749c2c5dadab78" exitCode=0
Mar 20 08:00:01 crc kubenswrapper[5136]: I0320 08:00:01.837503 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" event={"ID":"90ad33e9-cb6b-450c-9703-8d6e379f3075","Type":"ContainerDied","Data":"a9399ede282cd1d4b161abddeaa1193070be8003a67d2c8907749c2c5dadab78"}
Mar 20 08:00:01 crc kubenswrapper[5136]: I0320 08:00:01.837589 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" event={"ID":"90ad33e9-cb6b-450c-9703-8d6e379f3075","Type":"ContainerStarted","Data":"f8e824a7bd5fbe483a1d56f307bdb3aaf7cb4a10c107a47ddbdc568bcad73fa5"}
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.138355 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.303842 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume\") pod \"90ad33e9-cb6b-450c-9703-8d6e379f3075\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") "
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.303887 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwsk9\" (UniqueName: \"kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9\") pod \"90ad33e9-cb6b-450c-9703-8d6e379f3075\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") "
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.303916 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume\") pod \"90ad33e9-cb6b-450c-9703-8d6e379f3075\" (UID: \"90ad33e9-cb6b-450c-9703-8d6e379f3075\") "
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.304717 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume" (OuterVolumeSpecName: "config-volume") pod "90ad33e9-cb6b-450c-9703-8d6e379f3075" (UID: "90ad33e9-cb6b-450c-9703-8d6e379f3075"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.309474 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "90ad33e9-cb6b-450c-9703-8d6e379f3075" (UID: "90ad33e9-cb6b-450c-9703-8d6e379f3075"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.311304 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9" (OuterVolumeSpecName: "kube-api-access-xwsk9") pod "90ad33e9-cb6b-450c-9703-8d6e379f3075" (UID: "90ad33e9-cb6b-450c-9703-8d6e379f3075"). InnerVolumeSpecName "kube-api-access-xwsk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.396693 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e"
Mar 20 08:00:03 crc kubenswrapper[5136]: E0320 08:00:03.396921 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.406431 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/90ad33e9-cb6b-450c-9703-8d6e379f3075-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.406475 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwsk9\" (UniqueName: \"kubernetes.io/projected/90ad33e9-cb6b-450c-9703-8d6e379f3075-kube-api-access-xwsk9\") on node \"crc\" DevicePath \"\""
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.406487 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90ad33e9-cb6b-450c-9703-8d6e379f3075-config-volume\") on node \"crc\" DevicePath \"\""
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.853463 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl" event={"ID":"90ad33e9-cb6b-450c-9703-8d6e379f3075","Type":"ContainerDied","Data":"f8e824a7bd5fbe483a1d56f307bdb3aaf7cb4a10c107a47ddbdc568bcad73fa5"}
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.853510 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8e824a7bd5fbe483a1d56f307bdb3aaf7cb4a10c107a47ddbdc568bcad73fa5"
Mar 20 08:00:03 crc kubenswrapper[5136]: I0320 08:00:03.853541 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"
Mar 20 08:00:04 crc kubenswrapper[5136]: I0320 08:00:04.224426 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q"]
Mar 20 08:00:04 crc kubenswrapper[5136]: I0320 08:00:04.231249 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566515-7hn9q"]
Mar 20 08:00:04 crc kubenswrapper[5136]: I0320 08:00:04.407534 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f40568b-2bbc-4d1e-b089-6e08e1eede4b" path="/var/lib/kubelet/pods/6f40568b-2bbc-4d1e-b089-6e08e1eede4b/volumes"
Mar 20 08:00:08 crc kubenswrapper[5136]: I0320 08:00:08.887711 5136 generic.go:334] "Generic (PLEG): container finished" podID="dafdbb11-e22c-4545-8678-7757ef7e8605" containerID="12a5c143763826b6ba302aa9399c3eae56ceb49f8dbb078183073cdf280ba6a4" exitCode=0
Mar 20 08:00:08 crc kubenswrapper[5136]: I0320 08:00:08.887771 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566560-p6kgg" event={"ID":"dafdbb11-e22c-4545-8678-7757ef7e8605","Type":"ContainerDied","Data":"12a5c143763826b6ba302aa9399c3eae56ceb49f8dbb078183073cdf280ba6a4"}
Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.176305 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566560-p6kgg"
Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.299666 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp4fs\" (UniqueName: \"kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs\") pod \"dafdbb11-e22c-4545-8678-7757ef7e8605\" (UID: \"dafdbb11-e22c-4545-8678-7757ef7e8605\") "
Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.305761 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs" (OuterVolumeSpecName: "kube-api-access-rp4fs") pod "dafdbb11-e22c-4545-8678-7757ef7e8605" (UID: "dafdbb11-e22c-4545-8678-7757ef7e8605"). InnerVolumeSpecName "kube-api-access-rp4fs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.402084 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp4fs\" (UniqueName: \"kubernetes.io/projected/dafdbb11-e22c-4545-8678-7757ef7e8605-kube-api-access-rp4fs\") on node \"crc\" DevicePath \"\""
Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.908579 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566560-p6kgg" event={"ID":"dafdbb11-e22c-4545-8678-7757ef7e8605","Type":"ContainerDied","Data":"835dcd63591d6d0b44066723750deb0b5ca26b1e8f791f70756132ab60105cdb"}
Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.908636 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="835dcd63591d6d0b44066723750deb0b5ca26b1e8f791f70756132ab60105cdb"
Mar 20 08:00:10 crc kubenswrapper[5136]: I0320 08:00:10.908792 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566560-p6kgg"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.046517 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p9x88"]
Mar 20 08:00:11 crc kubenswrapper[5136]: E0320 08:00:11.046961 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dafdbb11-e22c-4545-8678-7757ef7e8605" containerName="oc"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.046983 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dafdbb11-e22c-4545-8678-7757ef7e8605" containerName="oc"
Mar 20 08:00:11 crc kubenswrapper[5136]: E0320 08:00:11.049112 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90ad33e9-cb6b-450c-9703-8d6e379f3075" containerName="collect-profiles"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.049144 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="90ad33e9-cb6b-450c-9703-8d6e379f3075" containerName="collect-profiles"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.049422 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="90ad33e9-cb6b-450c-9703-8d6e379f3075" containerName="collect-profiles"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.049450 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dafdbb11-e22c-4545-8678-7757ef7e8605" containerName="oc"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.050695 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.059989 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9x88"]
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.212242 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.212318 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.212380 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4h8w\" (UniqueName: \"kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.238955 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566554-rhdhc"]
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.244199 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566554-rhdhc"]
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.313624 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4h8w\" (UniqueName: \"kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.313693 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.313741 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.314284 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.314336 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.330703 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4h8w\" (UniqueName: \"kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w\") pod \"certified-operators-p9x88\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") " pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:11 crc kubenswrapper[5136]: I0320 08:00:11.371450 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:12 crc kubenswrapper[5136]: I0320 08:00:12.141540 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p9x88"]
Mar 20 08:00:12 crc kubenswrapper[5136]: I0320 08:00:12.404740 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b14c729c-040e-40a8-90bd-6310cf18d489" path="/var/lib/kubelet/pods/b14c729c-040e-40a8-90bd-6310cf18d489/volumes"
Mar 20 08:00:12 crc kubenswrapper[5136]: I0320 08:00:12.925460 5136 generic.go:334] "Generic (PLEG): container finished" podID="bb833208-918b-487d-925f-73b87fca3d3e" containerID="33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd" exitCode=0
Mar 20 08:00:12 crc kubenswrapper[5136]: I0320 08:00:12.925525 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerDied","Data":"33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd"}
Mar 20 08:00:12 crc kubenswrapper[5136]: I0320 08:00:12.925565 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerStarted","Data":"bc724bad1e1e588917febe48ff97f1945a7640eb711953cf54a34630b4b5196b"}
Mar 20 08:00:13 crc kubenswrapper[5136]: I0320 08:00:13.933203 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerStarted","Data":"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0"}
Mar 20 08:00:14 crc kubenswrapper[5136]: I0320 08:00:14.942223 5136 generic.go:334] "Generic (PLEG): container finished" podID="bb833208-918b-487d-925f-73b87fca3d3e" containerID="9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0" exitCode=0
Mar 20 08:00:14 crc kubenswrapper[5136]: I0320 08:00:14.942311 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerDied","Data":"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0"}
Mar 20 08:00:15 crc kubenswrapper[5136]: I0320 08:00:15.396626 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e"
Mar 20 08:00:15 crc kubenswrapper[5136]: E0320 08:00:15.397148 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:00:15 crc kubenswrapper[5136]: I0320 08:00:15.952399 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerStarted","Data":"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312"}
Mar 20 08:00:15 crc kubenswrapper[5136]: I0320 08:00:15.981641 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p9x88" podStartSLOduration=2.586404253 podStartE2EDuration="4.981618637s" podCreationTimestamp="2026-03-20 08:00:11 +0000 UTC" firstStartedPulling="2026-03-20 08:00:12.927373154 +0000 UTC m=+4245.186684305" lastFinishedPulling="2026-03-20 08:00:15.322587538 +0000 UTC m=+4247.581898689" observedRunningTime="2026-03-20 08:00:15.974425075 +0000 UTC m=+4248.233736216" watchObservedRunningTime="2026-03-20 08:00:15.981618637 +0000 UTC m=+4248.240929808"
Mar 20 08:00:21 crc kubenswrapper[5136]: I0320 08:00:21.372689 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:21 crc kubenswrapper[5136]: I0320 08:00:21.373437 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:21 crc kubenswrapper[5136]: I0320 08:00:21.445215 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:22 crc kubenswrapper[5136]: I0320 08:00:22.078145 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:22 crc kubenswrapper[5136]: I0320 08:00:22.436100 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9x88"]
Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.024982 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p9x88" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="registry-server" containerID="cri-o://c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312" gracePeriod=2
Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.494576 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.606489 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content\") pod \"bb833208-918b-487d-925f-73b87fca3d3e\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") "
Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.606552 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4h8w\" (UniqueName: \"kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w\") pod \"bb833208-918b-487d-925f-73b87fca3d3e\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") "
Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.606697 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities\") pod \"bb833208-918b-487d-925f-73b87fca3d3e\" (UID: \"bb833208-918b-487d-925f-73b87fca3d3e\") "
Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.608282 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities" (OuterVolumeSpecName: "utilities") pod "bb833208-918b-487d-925f-73b87fca3d3e" (UID: "bb833208-918b-487d-925f-73b87fca3d3e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.612140 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w" (OuterVolumeSpecName: "kube-api-access-m4h8w") pod "bb833208-918b-487d-925f-73b87fca3d3e" (UID: "bb833208-918b-487d-925f-73b87fca3d3e"). InnerVolumeSpecName "kube-api-access-m4h8w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.692297 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bb833208-918b-487d-925f-73b87fca3d3e" (UID: "bb833208-918b-487d-925f-73b87fca3d3e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.707936 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.707970 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4h8w\" (UniqueName: \"kubernetes.io/projected/bb833208-918b-487d-925f-73b87fca3d3e-kube-api-access-m4h8w\") on node \"crc\" DevicePath \"\""
Mar 20 08:00:24 crc kubenswrapper[5136]: I0320 08:00:24.707981 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bb833208-918b-487d-925f-73b87fca3d3e-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.041334 5136 generic.go:334] "Generic (PLEG): container finished" podID="bb833208-918b-487d-925f-73b87fca3d3e" containerID="c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312" exitCode=0
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.041434 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerDied","Data":"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312"}
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.043601 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p9x88" event={"ID":"bb833208-918b-487d-925f-73b87fca3d3e","Type":"ContainerDied","Data":"bc724bad1e1e588917febe48ff97f1945a7640eb711953cf54a34630b4b5196b"}
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.043647 5136 scope.go:117] "RemoveContainer" containerID="c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312"
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.041557 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p9x88"
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.088213 5136 scope.go:117] "RemoveContainer" containerID="9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0"
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.088802 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p9x88"]
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.099141 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p9x88"]
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.121647 5136 scope.go:117] "RemoveContainer" containerID="33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd"
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.154229 5136 scope.go:117] "RemoveContainer" containerID="c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312"
Mar 20 08:00:25 crc kubenswrapper[5136]: E0320 08:00:25.154902 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312\": container with ID starting with c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312 not found: ID does not exist" containerID="c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312"
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.154951 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312"} err="failed to get container status \"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312\": rpc error: code = NotFound desc = could not find container \"c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312\": container with ID starting with c576f8fb2b805a9e783735e7e64d718754594a1371c305ff23521222dc580312 not found: ID does not exist"
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.154976 5136 scope.go:117] "RemoveContainer" containerID="9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0"
Mar 20 08:00:25 crc kubenswrapper[5136]: E0320 08:00:25.155408 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0\": container with ID starting with 9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0 not found: ID does not exist" containerID="9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0"
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.155588 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0"} err="failed to get container status \"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0\": rpc error: code = NotFound desc = could not find container \"9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0\": container with ID starting with 9ed5747f15e4b3d95017f7afe35ded2154bbb52e7751d4fe5bb6ce65b32452c0 not found: ID does not exist"
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.155773 5136 scope.go:117] "RemoveContainer" containerID="33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd"
Mar 20 08:00:25 crc kubenswrapper[5136]: E0320 08:00:25.156320 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd\": container with ID starting with 33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd not found: ID does not exist" containerID="33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd"
Mar 20 08:00:25 crc kubenswrapper[5136]: I0320 08:00:25.156377 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd"} err="failed to get container status \"33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd\": rpc error: code = NotFound desc = could not find container \"33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd\": container with ID starting with 33e3b05e16201694865b412cb91484dbae7bafcab623f5cfe6792844fdb038dd not found: ID does not exist"
Mar 20 08:00:26 crc kubenswrapper[5136]: I0320 08:00:26.409807 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb833208-918b-487d-925f-73b87fca3d3e" path="/var/lib/kubelet/pods/bb833208-918b-487d-925f-73b87fca3d3e/volumes"
Mar 20 08:00:27 crc kubenswrapper[5136]: I0320 08:00:27.397626 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e"
Mar 20 08:00:28 crc kubenswrapper[5136]: I0320 08:00:28.075884 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71"}
Mar 20 08:00:35 crc kubenswrapper[5136]: I0320 08:00:35.408959 5136 scope.go:117] "RemoveContainer" containerID="2bdd42fe67ed84f38314c250e3cb1b39e3c5ab87ea5a8e75695c9bc28f0ae60d"
Mar 20 08:00:35 crc kubenswrapper[5136]: I0320 08:00:35.460349 5136 scope.go:117] "RemoveContainer" containerID="9f24a13849a44546b978a1e086eb14881e8d529298f6ffe2023d8ef7f1bdc4c6"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.166863 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566562-r69l2"]
Mar 20 08:02:00 crc kubenswrapper[5136]: E0320 08:02:00.170948 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="extract-utilities"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.170987 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="extract-utilities"
Mar 20 08:02:00 crc kubenswrapper[5136]: E0320 08:02:00.171046 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="registry-server"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.171067 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="registry-server"
Mar 20 08:02:00 crc kubenswrapper[5136]: E0320 08:02:00.171121 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="extract-content"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.171140 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="extract-content"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.171481 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb833208-918b-487d-925f-73b87fca3d3e" containerName="registry-server"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.172510 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566562-r69l2"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.174727 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566562-r69l2"]
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.175413 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.175516 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.178311 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.333665 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4pz6\" (UniqueName: \"kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6\") pod \"auto-csr-approver-29566562-r69l2\" (UID: \"f60a5f62-51f3-48fa-b718-e55da57c2647\") " pod="openshift-infra/auto-csr-approver-29566562-r69l2"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.435643 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4pz6\" (UniqueName: \"kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6\") pod \"auto-csr-approver-29566562-r69l2\" (UID: \"f60a5f62-51f3-48fa-b718-e55da57c2647\") " pod="openshift-infra/auto-csr-approver-29566562-r69l2"
Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.469207 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4pz6\" (UniqueName: \"kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6\") pod \"auto-csr-approver-29566562-r69l2\" (UID: \"f60a5f62-51f3-48fa-b718-e55da57c2647\") "
pod="openshift-infra/auto-csr-approver-29566562-r69l2" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.496691 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566562-r69l2" Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.943140 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566562-r69l2"] Mar 20 08:02:00 crc kubenswrapper[5136]: W0320 08:02:00.952006 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf60a5f62_51f3_48fa_b718_e55da57c2647.slice/crio-45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78 WatchSource:0}: Error finding container 45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78: Status 404 returned error can't find the container with id 45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78 Mar 20 08:02:00 crc kubenswrapper[5136]: I0320 08:02:00.956323 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:02:01 crc kubenswrapper[5136]: I0320 08:02:01.921665 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566562-r69l2" event={"ID":"f60a5f62-51f3-48fa-b718-e55da57c2647","Type":"ContainerStarted","Data":"45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78"} Mar 20 08:02:02 crc kubenswrapper[5136]: I0320 08:02:02.929341 5136 generic.go:334] "Generic (PLEG): container finished" podID="f60a5f62-51f3-48fa-b718-e55da57c2647" containerID="58b402a854cc55b74a5a39d5f73121fda2fdd8d8cefb4c57e5aa94f8a9a79d4e" exitCode=0 Mar 20 08:02:02 crc kubenswrapper[5136]: I0320 08:02:02.929425 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566562-r69l2" 
event={"ID":"f60a5f62-51f3-48fa-b718-e55da57c2647","Type":"ContainerDied","Data":"58b402a854cc55b74a5a39d5f73121fda2fdd8d8cefb4c57e5aa94f8a9a79d4e"} Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.285251 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566562-r69l2" Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.401944 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4pz6\" (UniqueName: \"kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6\") pod \"f60a5f62-51f3-48fa-b718-e55da57c2647\" (UID: \"f60a5f62-51f3-48fa-b718-e55da57c2647\") " Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.412223 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6" (OuterVolumeSpecName: "kube-api-access-d4pz6") pod "f60a5f62-51f3-48fa-b718-e55da57c2647" (UID: "f60a5f62-51f3-48fa-b718-e55da57c2647"). InnerVolumeSpecName "kube-api-access-d4pz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.505711 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4pz6\" (UniqueName: \"kubernetes.io/projected/f60a5f62-51f3-48fa-b718-e55da57c2647-kube-api-access-d4pz6\") on node \"crc\" DevicePath \"\"" Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.952836 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566562-r69l2" event={"ID":"f60a5f62-51f3-48fa-b718-e55da57c2647","Type":"ContainerDied","Data":"45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78"} Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.952873 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45b55944ae835cd02e21453db0cacefd735d38d31fe445287855cb2699111d78" Mar 20 08:02:04 crc kubenswrapper[5136]: I0320 08:02:04.952946 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566562-r69l2" Mar 20 08:02:05 crc kubenswrapper[5136]: I0320 08:02:05.364108 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566556-9vb6m"] Mar 20 08:02:05 crc kubenswrapper[5136]: I0320 08:02:05.372257 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566556-9vb6m"] Mar 20 08:02:06 crc kubenswrapper[5136]: I0320 08:02:06.413493 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9" path="/var/lib/kubelet/pods/a0a1c5dc-4d81-4a5b-a3fa-bdf0cb15e4f9/volumes" Mar 20 08:02:35 crc kubenswrapper[5136]: I0320 08:02:35.559455 5136 scope.go:117] "RemoveContainer" containerID="63ba059f75c6d4d3450d3ac5b012caecfb450fe5911e60bdfcbba855ebc6ef49" Mar 20 08:02:45 crc kubenswrapper[5136]: I0320 08:02:45.822230 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:02:45 crc kubenswrapper[5136]: I0320 08:02:45.822942 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:03:15 crc kubenswrapper[5136]: I0320 08:03:15.822619 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:03:15 crc kubenswrapper[5136]: I0320 08:03:15.823551 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:03:45 crc kubenswrapper[5136]: I0320 08:03:45.822395 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:03:45 crc kubenswrapper[5136]: I0320 08:03:45.823187 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:03:45 crc kubenswrapper[5136]: I0320 08:03:45.823260 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:03:45 crc kubenswrapper[5136]: I0320 08:03:45.824349 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:03:45 crc kubenswrapper[5136]: I0320 08:03:45.824451 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71" gracePeriod=600 Mar 20 08:03:46 crc kubenswrapper[5136]: I0320 08:03:46.826385 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71" exitCode=0 Mar 20 08:03:46 crc kubenswrapper[5136]: I0320 08:03:46.826459 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71"} Mar 20 08:03:46 crc kubenswrapper[5136]: I0320 08:03:46.826969 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"} Mar 20 08:03:46 crc kubenswrapper[5136]: I0320 08:03:46.826990 5136 scope.go:117] "RemoveContainer" containerID="2d716f5ca95f63a629b8c5eba728e6f767a9d7a380a6adb207558787dee59c3e" Mar 20 08:03:50 crc kubenswrapper[5136]: I0320 08:03:50.923674 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:03:50 crc kubenswrapper[5136]: E0320 08:03:50.924504 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f60a5f62-51f3-48fa-b718-e55da57c2647" containerName="oc" Mar 20 08:03:50 crc kubenswrapper[5136]: I0320 08:03:50.924519 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f60a5f62-51f3-48fa-b718-e55da57c2647" containerName="oc" Mar 20 08:03:50 crc kubenswrapper[5136]: I0320 08:03:50.924749 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f60a5f62-51f3-48fa-b718-e55da57c2647" containerName="oc" Mar 20 08:03:50 crc kubenswrapper[5136]: I0320 08:03:50.925976 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:50 crc kubenswrapper[5136]: I0320 08:03:50.951629 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.009919 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.009959 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.010089 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc945\" (UniqueName: \"kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.111045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.111086 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.111120 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc945\" (UniqueName: \"kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.111698 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.111762 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.141677 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc945\" (UniqueName: \"kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945\") pod \"community-operators-9x9kk\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.244793 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.522945 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.866884 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerID="99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04" exitCode=0 Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.866936 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerDied","Data":"99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04"} Mar 20 08:03:51 crc kubenswrapper[5136]: I0320 08:03:51.867154 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerStarted","Data":"6216dc20188501420e067c5dab2af6a0f3f6dc51d1d86c18f9beca6c561c1c90"} Mar 20 08:03:53 crc kubenswrapper[5136]: I0320 08:03:53.884824 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerID="72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9" exitCode=0 Mar 20 08:03:53 crc kubenswrapper[5136]: I0320 08:03:53.884893 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerDied","Data":"72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9"} Mar 20 08:03:54 crc kubenswrapper[5136]: I0320 08:03:54.894738 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" 
event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerStarted","Data":"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1"} Mar 20 08:03:54 crc kubenswrapper[5136]: I0320 08:03:54.920844 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9x9kk" podStartSLOduration=2.5020369159999998 podStartE2EDuration="4.920806018s" podCreationTimestamp="2026-03-20 08:03:50 +0000 UTC" firstStartedPulling="2026-03-20 08:03:51.868259518 +0000 UTC m=+4464.127570699" lastFinishedPulling="2026-03-20 08:03:54.28702862 +0000 UTC m=+4466.546339801" observedRunningTime="2026-03-20 08:03:54.912752439 +0000 UTC m=+4467.172063610" watchObservedRunningTime="2026-03-20 08:03:54.920806018 +0000 UTC m=+4467.180117169" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.040542 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.043692 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.059553 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.105419 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qdjt\" (UniqueName: \"kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.105559 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.105697 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.206974 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.207059 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.207146 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qdjt\" (UniqueName: \"kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.208631 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.208776 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.232745 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qdjt\" (UniqueName: \"kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt\") pod \"redhat-marketplace-vf4gp\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.380746 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.841767 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:03:57 crc kubenswrapper[5136]: I0320 08:03:57.916086 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerStarted","Data":"410338ce57cd4886948cba2241cb257aebf7cd2afe0af4e84e8032565e4f8b0a"} Mar 20 08:03:58 crc kubenswrapper[5136]: I0320 08:03:58.927539 5136 generic.go:334] "Generic (PLEG): container finished" podID="7312d03e-31ae-4c8a-95f6-23325b107124" containerID="a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837" exitCode=0 Mar 20 08:03:58 crc kubenswrapper[5136]: I0320 08:03:58.927612 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerDied","Data":"a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837"} Mar 20 08:03:59 crc kubenswrapper[5136]: I0320 08:03:59.939804 5136 generic.go:334] "Generic (PLEG): container finished" podID="7312d03e-31ae-4c8a-95f6-23325b107124" containerID="761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787" exitCode=0 Mar 20 08:03:59 crc kubenswrapper[5136]: I0320 08:03:59.939938 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerDied","Data":"761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787"} Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.149259 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566564-wps8c"] Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.151159 5136 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.156748 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.156936 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566564-wps8c"] Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.156987 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.157770 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.256038 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb46t\" (UniqueName: \"kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t\") pod \"auto-csr-approver-29566564-wps8c\" (UID: \"dd0441d6-4822-4c0e-b72c-b33d59e4a81b\") " pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.357441 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb46t\" (UniqueName: \"kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t\") pod \"auto-csr-approver-29566564-wps8c\" (UID: \"dd0441d6-4822-4c0e-b72c-b33d59e4a81b\") " pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.388262 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb46t\" (UniqueName: \"kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t\") pod \"auto-csr-approver-29566564-wps8c\" (UID: \"dd0441d6-4822-4c0e-b72c-b33d59e4a81b\") 
" pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.486875 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.949372 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerStarted","Data":"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578"} Mar 20 08:04:00 crc kubenswrapper[5136]: I0320 08:04:00.978694 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vf4gp" podStartSLOduration=2.519842918 podStartE2EDuration="3.978675761s" podCreationTimestamp="2026-03-20 08:03:57 +0000 UTC" firstStartedPulling="2026-03-20 08:03:58.930234462 +0000 UTC m=+4471.189545653" lastFinishedPulling="2026-03-20 08:04:00.389067305 +0000 UTC m=+4472.648378496" observedRunningTime="2026-03-20 08:04:00.970670793 +0000 UTC m=+4473.229981954" watchObservedRunningTime="2026-03-20 08:04:00.978675761 +0000 UTC m=+4473.237986912" Mar 20 08:04:01 crc kubenswrapper[5136]: I0320 08:04:01.007935 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566564-wps8c"] Mar 20 08:04:01 crc kubenswrapper[5136]: I0320 08:04:01.245168 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:01 crc kubenswrapper[5136]: I0320 08:04:01.245227 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:01 crc kubenswrapper[5136]: I0320 08:04:01.298899 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:01 crc kubenswrapper[5136]: I0320 
08:04:01.957520 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566564-wps8c" event={"ID":"dd0441d6-4822-4c0e-b72c-b33d59e4a81b","Type":"ContainerStarted","Data":"3a00424e61e7f40fc003f3dd7a302d720672be345dbaa81b236aed4563cf3e2d"} Mar 20 08:04:02 crc kubenswrapper[5136]: I0320 08:04:02.119080 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:02 crc kubenswrapper[5136]: I0320 08:04:02.967752 5136 generic.go:334] "Generic (PLEG): container finished" podID="dd0441d6-4822-4c0e-b72c-b33d59e4a81b" containerID="a8db32019a3eb6483d295a328515205a1810920d8bfa5e500df3dffc05d44642" exitCode=0 Mar 20 08:04:02 crc kubenswrapper[5136]: I0320 08:04:02.967891 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566564-wps8c" event={"ID":"dd0441d6-4822-4c0e-b72c-b33d59e4a81b","Type":"ContainerDied","Data":"a8db32019a3eb6483d295a328515205a1810920d8bfa5e500df3dffc05d44642"} Mar 20 08:04:03 crc kubenswrapper[5136]: I0320 08:04:03.453380 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:04:03 crc kubenswrapper[5136]: I0320 08:04:03.990427 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9x9kk" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="registry-server" containerID="cri-o://a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1" gracePeriod=2 Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.312067 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.429579 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.440253 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hb46t\" (UniqueName: \"kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t\") pod \"dd0441d6-4822-4c0e-b72c-b33d59e4a81b\" (UID: \"dd0441d6-4822-4c0e-b72c-b33d59e4a81b\") " Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.487647 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t" (OuterVolumeSpecName: "kube-api-access-hb46t") pod "dd0441d6-4822-4c0e-b72c-b33d59e4a81b" (UID: "dd0441d6-4822-4c0e-b72c-b33d59e4a81b"). InnerVolumeSpecName "kube-api-access-hb46t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.541465 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content\") pod \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.541544 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc945\" (UniqueName: \"kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945\") pod \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.541574 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities\") pod \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\" (UID: \"c7cd59f8-3dfb-45b7-884b-eb0a7670011c\") " Mar 20 08:04:04 crc 
kubenswrapper[5136]: I0320 08:04:04.541928 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hb46t\" (UniqueName: \"kubernetes.io/projected/dd0441d6-4822-4c0e-b72c-b33d59e4a81b-kube-api-access-hb46t\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.542910 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities" (OuterVolumeSpecName: "utilities") pod "c7cd59f8-3dfb-45b7-884b-eb0a7670011c" (UID: "c7cd59f8-3dfb-45b7-884b-eb0a7670011c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.545920 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945" (OuterVolumeSpecName: "kube-api-access-dc945") pod "c7cd59f8-3dfb-45b7-884b-eb0a7670011c" (UID: "c7cd59f8-3dfb-45b7-884b-eb0a7670011c"). InnerVolumeSpecName "kube-api-access-dc945". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.598161 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7cd59f8-3dfb-45b7-884b-eb0a7670011c" (UID: "c7cd59f8-3dfb-45b7-884b-eb0a7670011c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.642873 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc945\" (UniqueName: \"kubernetes.io/projected/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-kube-api-access-dc945\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.642918 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.642931 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7cd59f8-3dfb-45b7-884b-eb0a7670011c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.999071 5136 generic.go:334] "Generic (PLEG): container finished" podID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerID="a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1" exitCode=0 Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.999125 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9x9kk" Mar 20 08:04:04 crc kubenswrapper[5136]: I0320 08:04:04.999139 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerDied","Data":"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1"} Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:04.999193 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9x9kk" event={"ID":"c7cd59f8-3dfb-45b7-884b-eb0a7670011c","Type":"ContainerDied","Data":"6216dc20188501420e067c5dab2af6a0f3f6dc51d1d86c18f9beca6c561c1c90"} Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:04.999221 5136 scope.go:117] "RemoveContainer" containerID="a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.003116 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566564-wps8c" event={"ID":"dd0441d6-4822-4c0e-b72c-b33d59e4a81b","Type":"ContainerDied","Data":"3a00424e61e7f40fc003f3dd7a302d720672be345dbaa81b236aed4563cf3e2d"} Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.003157 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a00424e61e7f40fc003f3dd7a302d720672be345dbaa81b236aed4563cf3e2d" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.003170 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566564-wps8c" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.022711 5136 scope.go:117] "RemoveContainer" containerID="72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.048847 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.057611 5136 scope.go:117] "RemoveContainer" containerID="99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.059723 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9x9kk"] Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.085743 5136 scope.go:117] "RemoveContainer" containerID="a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1" Mar 20 08:04:05 crc kubenswrapper[5136]: E0320 08:04:05.086247 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1\": container with ID starting with a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1 not found: ID does not exist" containerID="a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.086279 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1"} err="failed to get container status \"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1\": rpc error: code = NotFound desc = could not find container \"a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1\": container with ID starting with a71c4d92ee8295add98c8af5e44a282bbb882f1a53f8bbedc3b6b91128e98ee1 not 
found: ID does not exist" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.086298 5136 scope.go:117] "RemoveContainer" containerID="72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9" Mar 20 08:04:05 crc kubenswrapper[5136]: E0320 08:04:05.086851 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9\": container with ID starting with 72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9 not found: ID does not exist" containerID="72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.086910 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9"} err="failed to get container status \"72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9\": rpc error: code = NotFound desc = could not find container \"72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9\": container with ID starting with 72edf18f51e728a3b79d9223dff780bfb2f79ef0aba838ee4dcb50ad40f645d9 not found: ID does not exist" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.086949 5136 scope.go:117] "RemoveContainer" containerID="99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04" Mar 20 08:04:05 crc kubenswrapper[5136]: E0320 08:04:05.087241 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04\": container with ID starting with 99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04 not found: ID does not exist" containerID="99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.087312 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04"} err="failed to get container status \"99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04\": rpc error: code = NotFound desc = could not find container \"99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04\": container with ID starting with 99c97a942ea9cf1301c4c8a6a841bdc5757faffdc4fa5859ba64cb1a66d1be04 not found: ID does not exist" Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.386517 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566558-28xg4"] Mar 20 08:04:05 crc kubenswrapper[5136]: I0320 08:04:05.391782 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566558-28xg4"] Mar 20 08:04:06 crc kubenswrapper[5136]: I0320 08:04:06.414761 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a36a1a-3cb0-4827-94dc-d0f12aaf385f" path="/var/lib/kubelet/pods/86a36a1a-3cb0-4827-94dc-d0f12aaf385f/volumes" Mar 20 08:04:06 crc kubenswrapper[5136]: I0320 08:04:06.415834 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" path="/var/lib/kubelet/pods/c7cd59f8-3dfb-45b7-884b-eb0a7670011c/volumes" Mar 20 08:04:07 crc kubenswrapper[5136]: I0320 08:04:07.381544 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:07 crc kubenswrapper[5136]: I0320 08:04:07.381956 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:07 crc kubenswrapper[5136]: I0320 08:04:07.428491 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:08 crc kubenswrapper[5136]: I0320 
08:04:08.080770 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:08 crc kubenswrapper[5136]: I0320 08:04:08.854750 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.035912 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vf4gp" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="registry-server" containerID="cri-o://03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578" gracePeriod=2 Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.444538 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.520901 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content\") pod \"7312d03e-31ae-4c8a-95f6-23325b107124\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.521050 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qdjt\" (UniqueName: \"kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt\") pod \"7312d03e-31ae-4c8a-95f6-23325b107124\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.521156 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities\") pod \"7312d03e-31ae-4c8a-95f6-23325b107124\" (UID: \"7312d03e-31ae-4c8a-95f6-23325b107124\") " Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 
08:04:10.522497 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities" (OuterVolumeSpecName: "utilities") pod "7312d03e-31ae-4c8a-95f6-23325b107124" (UID: "7312d03e-31ae-4c8a-95f6-23325b107124"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.530952 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt" (OuterVolumeSpecName: "kube-api-access-9qdjt") pod "7312d03e-31ae-4c8a-95f6-23325b107124" (UID: "7312d03e-31ae-4c8a-95f6-23325b107124"). InnerVolumeSpecName "kube-api-access-9qdjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.557773 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7312d03e-31ae-4c8a-95f6-23325b107124" (UID: "7312d03e-31ae-4c8a-95f6-23325b107124"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.623152 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.623186 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7312d03e-31ae-4c8a-95f6-23325b107124-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:10 crc kubenswrapper[5136]: I0320 08:04:10.623197 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qdjt\" (UniqueName: \"kubernetes.io/projected/7312d03e-31ae-4c8a-95f6-23325b107124-kube-api-access-9qdjt\") on node \"crc\" DevicePath \"\"" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.051230 5136 generic.go:334] "Generic (PLEG): container finished" podID="7312d03e-31ae-4c8a-95f6-23325b107124" containerID="03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578" exitCode=0 Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.051280 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerDied","Data":"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578"} Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.051313 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vf4gp" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.051346 5136 scope.go:117] "RemoveContainer" containerID="03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.051332 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vf4gp" event={"ID":"7312d03e-31ae-4c8a-95f6-23325b107124","Type":"ContainerDied","Data":"410338ce57cd4886948cba2241cb257aebf7cd2afe0af4e84e8032565e4f8b0a"} Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.086219 5136 scope.go:117] "RemoveContainer" containerID="761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.109005 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.117197 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vf4gp"] Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.132798 5136 scope.go:117] "RemoveContainer" containerID="a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.169482 5136 scope.go:117] "RemoveContainer" containerID="03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578" Mar 20 08:04:11 crc kubenswrapper[5136]: E0320 08:04:11.170042 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578\": container with ID starting with 03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578 not found: ID does not exist" containerID="03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.170086 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578"} err="failed to get container status \"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578\": rpc error: code = NotFound desc = could not find container \"03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578\": container with ID starting with 03b0c1769d68977e9de32a9a3cdbca73bef1f14fbe272a3e4c4fe5a5d7d72578 not found: ID does not exist" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.170133 5136 scope.go:117] "RemoveContainer" containerID="761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787" Mar 20 08:04:11 crc kubenswrapper[5136]: E0320 08:04:11.170572 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787\": container with ID starting with 761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787 not found: ID does not exist" containerID="761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.170610 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787"} err="failed to get container status \"761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787\": rpc error: code = NotFound desc = could not find container \"761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787\": container with ID starting with 761109f0c38a7ab24fca79b9e228f7017b4e64f03b974a6898dc7958e7d7a787 not found: ID does not exist" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.170624 5136 scope.go:117] "RemoveContainer" containerID="a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837" Mar 20 08:04:11 crc kubenswrapper[5136]: E0320 
08:04:11.170952 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837\": container with ID starting with a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837 not found: ID does not exist" containerID="a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837" Mar 20 08:04:11 crc kubenswrapper[5136]: I0320 08:04:11.171007 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837"} err="failed to get container status \"a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837\": rpc error: code = NotFound desc = could not find container \"a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837\": container with ID starting with a3363b606e2f5613c8f8d29a0b1c9efe9df8d551a9b389199ac4fb0ae37a4837 not found: ID does not exist" Mar 20 08:04:12 crc kubenswrapper[5136]: I0320 08:04:12.411252 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" path="/var/lib/kubelet/pods/7312d03e-31ae-4c8a-95f6-23325b107124/volumes" Mar 20 08:04:35 crc kubenswrapper[5136]: I0320 08:04:35.766076 5136 scope.go:117] "RemoveContainer" containerID="10b2eeb474ede75a881a1e488088999be16ed59aa15136e1ec1d51ce1d945aec" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.180415 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566566-tj6mv"] Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.181713 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="extract-utilities" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.181745 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" 
containerName="extract-utilities" Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.181778 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.181796 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.181881 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd0441d6-4822-4c0e-b72c-b33d59e4a81b" containerName="oc" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.181902 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd0441d6-4822-4c0e-b72c-b33d59e4a81b" containerName="oc" Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.181923 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="extract-utilities" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.181941 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="extract-utilities" Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.181972 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="extract-content" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.181988 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="extract-content" Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.182031 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="extract-content" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.182047 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="extract-content" 
Mar 20 08:06:00 crc kubenswrapper[5136]: E0320 08:06:00.182073 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.182092 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.182457 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7312d03e-31ae-4c8a-95f6-23325b107124" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.182494 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd0441d6-4822-4c0e-b72c-b33d59e4a81b" containerName="oc" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.182527 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7cd59f8-3dfb-45b7-884b-eb0a7670011c" containerName="registry-server" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.183572 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.191554 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.192288 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.192667 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.224695 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566566-tj6mv"] Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.251529 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbbj\" (UniqueName: \"kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj\") pod \"auto-csr-approver-29566566-tj6mv\" (UID: \"1171863e-bf58-4961-a881-403e291cc93a\") " pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.353323 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbbj\" (UniqueName: \"kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj\") pod \"auto-csr-approver-29566566-tj6mv\" (UID: \"1171863e-bf58-4961-a881-403e291cc93a\") " pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.381230 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdbbj\" (UniqueName: \"kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj\") pod \"auto-csr-approver-29566566-tj6mv\" (UID: \"1171863e-bf58-4961-a881-403e291cc93a\") " 
pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.536797 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.959152 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566566-tj6mv"] Mar 20 08:06:00 crc kubenswrapper[5136]: I0320 08:06:00.984013 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" event={"ID":"1171863e-bf58-4961-a881-403e291cc93a","Type":"ContainerStarted","Data":"bba6e3b4e06639d22811ed7b4a4ba11c4a1f53e2c440a6933e344c68966fb961"} Mar 20 08:06:03 crc kubenswrapper[5136]: I0320 08:06:03.000931 5136 generic.go:334] "Generic (PLEG): container finished" podID="1171863e-bf58-4961-a881-403e291cc93a" containerID="b1deef80cd1c3d1582469b2cb38e1b1a394ed2e7b6171fb2539451da0bf3a162" exitCode=0 Mar 20 08:06:03 crc kubenswrapper[5136]: I0320 08:06:03.000991 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" event={"ID":"1171863e-bf58-4961-a881-403e291cc93a","Type":"ContainerDied","Data":"b1deef80cd1c3d1582469b2cb38e1b1a394ed2e7b6171fb2539451da0bf3a162"} Mar 20 08:06:04 crc kubenswrapper[5136]: I0320 08:06:04.347212 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" Mar 20 08:06:04 crc kubenswrapper[5136]: I0320 08:06:04.409161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdbbj\" (UniqueName: \"kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj\") pod \"1171863e-bf58-4961-a881-403e291cc93a\" (UID: \"1171863e-bf58-4961-a881-403e291cc93a\") " Mar 20 08:06:04 crc kubenswrapper[5136]: I0320 08:06:04.415408 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj" (OuterVolumeSpecName: "kube-api-access-fdbbj") pod "1171863e-bf58-4961-a881-403e291cc93a" (UID: "1171863e-bf58-4961-a881-403e291cc93a"). InnerVolumeSpecName "kube-api-access-fdbbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:06:04 crc kubenswrapper[5136]: I0320 08:06:04.513782 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdbbj\" (UniqueName: \"kubernetes.io/projected/1171863e-bf58-4961-a881-403e291cc93a-kube-api-access-fdbbj\") on node \"crc\" DevicePath \"\"" Mar 20 08:06:05 crc kubenswrapper[5136]: I0320 08:06:05.029113 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566566-tj6mv" event={"ID":"1171863e-bf58-4961-a881-403e291cc93a","Type":"ContainerDied","Data":"bba6e3b4e06639d22811ed7b4a4ba11c4a1f53e2c440a6933e344c68966fb961"} Mar 20 08:06:05 crc kubenswrapper[5136]: I0320 08:06:05.029165 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bba6e3b4e06639d22811ed7b4a4ba11c4a1f53e2c440a6933e344c68966fb961" Mar 20 08:06:05 crc kubenswrapper[5136]: I0320 08:06:05.029214 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566566-tj6mv"
Mar 20 08:06:05 crc kubenswrapper[5136]: I0320 08:06:05.432915 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566560-p6kgg"]
Mar 20 08:06:05 crc kubenswrapper[5136]: I0320 08:06:05.437405 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566560-p6kgg"]
Mar 20 08:06:06 crc kubenswrapper[5136]: I0320 08:06:06.407162 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dafdbb11-e22c-4545-8678-7757ef7e8605" path="/var/lib/kubelet/pods/dafdbb11-e22c-4545-8678-7757ef7e8605/volumes"
Mar 20 08:06:15 crc kubenswrapper[5136]: I0320 08:06:15.822598 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 08:06:15 crc kubenswrapper[5136]: I0320 08:06:15.823096 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 08:06:35 crc kubenswrapper[5136]: I0320 08:06:35.905537 5136 scope.go:117] "RemoveContainer" containerID="12a5c143763826b6ba302aa9399c3eae56ceb49f8dbb078183073cdf280ba6a4"
Mar 20 08:06:45 crc kubenswrapper[5136]: I0320 08:06:45.822680 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 08:06:45 crc kubenswrapper[5136]: I0320 08:06:45.823287 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 08:07:15 crc kubenswrapper[5136]: I0320 08:07:15.822477 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 08:07:15 crc kubenswrapper[5136]: I0320 08:07:15.823149 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 08:07:15 crc kubenswrapper[5136]: I0320 08:07:15.823207 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28"
Mar 20 08:07:15 crc kubenswrapper[5136]: I0320 08:07:15.823990 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 20 08:07:15 crc kubenswrapper[5136]: I0320 08:07:15.824071 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" gracePeriod=600
Mar 20 08:07:15 crc kubenswrapper[5136]: E0320 08:07:15.951555 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:07:16 crc kubenswrapper[5136]: I0320 08:07:16.605268 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" exitCode=0
Mar 20 08:07:16 crc kubenswrapper[5136]: I0320 08:07:16.605336 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"}
Mar 20 08:07:16 crc kubenswrapper[5136]: I0320 08:07:16.605387 5136 scope.go:117] "RemoveContainer" containerID="5ac6f79e4add2f58b46aeafaf79304def26e975edac0dc7ec4b6cf1f00afef71"
Mar 20 08:07:16 crc kubenswrapper[5136]: I0320 08:07:16.606210 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:07:16 crc kubenswrapper[5136]: E0320 08:07:16.606846 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:07:30 crc kubenswrapper[5136]: I0320 08:07:30.397322 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:07:30 crc kubenswrapper[5136]: E0320 08:07:30.398384 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:07:43 crc kubenswrapper[5136]: I0320 08:07:43.397160 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:07:43 crc kubenswrapper[5136]: E0320 08:07:43.398164 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.305218 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"]
Mar 20 08:07:51 crc kubenswrapper[5136]: E0320 08:07:51.306370 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1171863e-bf58-4961-a881-403e291cc93a" containerName="oc"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.306394 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1171863e-bf58-4961-a881-403e291cc93a" containerName="oc"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.306660 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1171863e-bf58-4961-a881-403e291cc93a" containerName="oc"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.309696 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.322899 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"]
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.402593 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjfjm\" (UniqueName: \"kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.402650 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.402712 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.504240 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjfjm\" (UniqueName: \"kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.504293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.504358 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.505091 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.505116 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.529449 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjfjm\" (UniqueName: \"kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm\") pod \"redhat-operators-4hwz2\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") " pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:07:51 crc kubenswrapper[5136]: I0320 08:07:51.633501 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:07:52 crc kubenswrapper[5136]: I0320 08:07:52.070723 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"]
Mar 20 08:07:52 crc kubenswrapper[5136]: I0320 08:07:52.895559 5136 generic.go:334] "Generic (PLEG): container finished" podID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerID="6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5" exitCode=0
Mar 20 08:07:52 crc kubenswrapper[5136]: I0320 08:07:52.895706 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerDied","Data":"6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5"}
Mar 20 08:07:52 crc kubenswrapper[5136]: I0320 08:07:52.895868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerStarted","Data":"81a9f974105c9ce0fad94de5e00134979d625fe19fad5a54c8ea767113a113e2"}
Mar 20 08:07:52 crc kubenswrapper[5136]: I0320 08:07:52.897342 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 08:07:53 crc kubenswrapper[5136]: I0320 08:07:53.912245 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerStarted","Data":"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e"}
Mar 20 08:07:54 crc kubenswrapper[5136]: I0320 08:07:54.397717 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:07:54 crc kubenswrapper[5136]: E0320 08:07:54.398469 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:07:54 crc kubenswrapper[5136]: I0320 08:07:54.924495 5136 generic.go:334] "Generic (PLEG): container finished" podID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerID="efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e" exitCode=0
Mar 20 08:07:54 crc kubenswrapper[5136]: I0320 08:07:54.924566 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerDied","Data":"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e"}
Mar 20 08:07:55 crc kubenswrapper[5136]: I0320 08:07:55.936786 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerStarted","Data":"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906"}
Mar 20 08:07:55 crc kubenswrapper[5136]: I0320 08:07:55.966503 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4hwz2" podStartSLOduration=2.523671052 podStartE2EDuration="4.966477956s" podCreationTimestamp="2026-03-20 08:07:51 +0000 UTC" firstStartedPulling="2026-03-20 08:07:52.897084986 +0000 UTC m=+4705.156396137" lastFinishedPulling="2026-03-20 08:07:55.33989188 +0000 UTC m=+4707.599203041" observedRunningTime="2026-03-20 08:07:55.959321395 +0000 UTC m=+4708.218632546" watchObservedRunningTime="2026-03-20 08:07:55.966477956 +0000 UTC m=+4708.225789127"
Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.142151 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566568-t2xwb"]
Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.143491 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566568-t2xwb"
Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.146735 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.146998 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.147214 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.162443 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566568-t2xwb"]
Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.335112 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lvhc\" (UniqueName: \"kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc\") pod \"auto-csr-approver-29566568-t2xwb\" (UID: \"b174d612-6f70-49f1-a024-93c2a9bd0824\") " pod="openshift-infra/auto-csr-approver-29566568-t2xwb"
Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.436503 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lvhc\" (UniqueName: \"kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc\") pod \"auto-csr-approver-29566568-t2xwb\" (UID: \"b174d612-6f70-49f1-a024-93c2a9bd0824\") " pod="openshift-infra/auto-csr-approver-29566568-t2xwb"
Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.469134 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lvhc\" (UniqueName: \"kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc\") pod \"auto-csr-approver-29566568-t2xwb\" (UID: \"b174d612-6f70-49f1-a024-93c2a9bd0824\") " pod="openshift-infra/auto-csr-approver-29566568-t2xwb"
Mar 20 08:08:00 crc kubenswrapper[5136]: I0320 08:08:00.763965 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566568-t2xwb"
Mar 20 08:08:01 crc kubenswrapper[5136]: I0320 08:08:01.015126 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566568-t2xwb"]
Mar 20 08:08:01 crc kubenswrapper[5136]: I0320 08:08:01.634234 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:08:01 crc kubenswrapper[5136]: I0320 08:08:01.634302 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:08:01 crc kubenswrapper[5136]: I0320 08:08:01.989114 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566568-t2xwb" event={"ID":"b174d612-6f70-49f1-a024-93c2a9bd0824","Type":"ContainerStarted","Data":"6841d8670809f2ed51d755167f8c9ddff5654a631dfbe640e35c1fbc6a14d522"}
Mar 20 08:08:02 crc kubenswrapper[5136]: I0320 08:08:02.686226 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4hwz2" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="registry-server" probeResult="failure" output=<
Mar 20 08:08:02 crc kubenswrapper[5136]: 	timeout: failed to connect service ":50051" within 1s
Mar 20 08:08:02 crc kubenswrapper[5136]:  >
Mar 20 08:08:03 crc kubenswrapper[5136]: I0320 08:08:03.001976 5136 generic.go:334] "Generic (PLEG): container finished" podID="b174d612-6f70-49f1-a024-93c2a9bd0824" containerID="cfc59d82836f1e5aa8be6bb29641caa9e94e4841e523822550b31308b0957aae" exitCode=0
Mar 20 08:08:03 crc kubenswrapper[5136]: I0320 08:08:03.002081 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566568-t2xwb" event={"ID":"b174d612-6f70-49f1-a024-93c2a9bd0824","Type":"ContainerDied","Data":"cfc59d82836f1e5aa8be6bb29641caa9e94e4841e523822550b31308b0957aae"}
Mar 20 08:08:04 crc kubenswrapper[5136]: I0320 08:08:04.326983 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566568-t2xwb"
Mar 20 08:08:04 crc kubenswrapper[5136]: I0320 08:08:04.495260 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lvhc\" (UniqueName: \"kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc\") pod \"b174d612-6f70-49f1-a024-93c2a9bd0824\" (UID: \"b174d612-6f70-49f1-a024-93c2a9bd0824\") "
Mar 20 08:08:04 crc kubenswrapper[5136]: I0320 08:08:04.501449 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc" (OuterVolumeSpecName: "kube-api-access-7lvhc") pod "b174d612-6f70-49f1-a024-93c2a9bd0824" (UID: "b174d612-6f70-49f1-a024-93c2a9bd0824"). InnerVolumeSpecName "kube-api-access-7lvhc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:08:04 crc kubenswrapper[5136]: I0320 08:08:04.597673 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lvhc\" (UniqueName: \"kubernetes.io/projected/b174d612-6f70-49f1-a024-93c2a9bd0824-kube-api-access-7lvhc\") on node \"crc\" DevicePath \"\""
Mar 20 08:08:05 crc kubenswrapper[5136]: I0320 08:08:05.020138 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566568-t2xwb" event={"ID":"b174d612-6f70-49f1-a024-93c2a9bd0824","Type":"ContainerDied","Data":"6841d8670809f2ed51d755167f8c9ddff5654a631dfbe640e35c1fbc6a14d522"}
Mar 20 08:08:05 crc kubenswrapper[5136]: I0320 08:08:05.020572 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6841d8670809f2ed51d755167f8c9ddff5654a631dfbe640e35c1fbc6a14d522"
Mar 20 08:08:05 crc kubenswrapper[5136]: I0320 08:08:05.020175 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566568-t2xwb"
Mar 20 08:08:05 crc kubenswrapper[5136]: I0320 08:08:05.408401 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566562-r69l2"]
Mar 20 08:08:05 crc kubenswrapper[5136]: I0320 08:08:05.417763 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566562-r69l2"]
Mar 20 08:08:06 crc kubenswrapper[5136]: I0320 08:08:06.411172 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f60a5f62-51f3-48fa-b718-e55da57c2647" path="/var/lib/kubelet/pods/f60a5f62-51f3-48fa-b718-e55da57c2647/volumes"
Mar 20 08:08:09 crc kubenswrapper[5136]: I0320 08:08:09.397234 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:08:09 crc kubenswrapper[5136]: E0320 08:08:09.398139 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:08:11 crc kubenswrapper[5136]: I0320 08:08:11.698526 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:08:11 crc kubenswrapper[5136]: I0320 08:08:11.748544 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:08:11 crc kubenswrapper[5136]: I0320 08:08:11.953402 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"]
Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.075672 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4hwz2" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="registry-server" containerID="cri-o://d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906" gracePeriod=2
Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.752712 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.941764 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities\") pod \"35c277d6-a1a4-484a-bf8a-cb58210afedd\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") "
Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.941947 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content\") pod \"35c277d6-a1a4-484a-bf8a-cb58210afedd\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") "
Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.941984 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjfjm\" (UniqueName: \"kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm\") pod \"35c277d6-a1a4-484a-bf8a-cb58210afedd\" (UID: \"35c277d6-a1a4-484a-bf8a-cb58210afedd\") "
Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.944431 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities" (OuterVolumeSpecName: "utilities") pod "35c277d6-a1a4-484a-bf8a-cb58210afedd" (UID: "35c277d6-a1a4-484a-bf8a-cb58210afedd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:08:13 crc kubenswrapper[5136]: I0320 08:08:13.956198 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm" (OuterVolumeSpecName: "kube-api-access-fjfjm") pod "35c277d6-a1a4-484a-bf8a-cb58210afedd" (UID: "35c277d6-a1a4-484a-bf8a-cb58210afedd"). InnerVolumeSpecName "kube-api-access-fjfjm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.044110 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjfjm\" (UniqueName: \"kubernetes.io/projected/35c277d6-a1a4-484a-bf8a-cb58210afedd-kube-api-access-fjfjm\") on node \"crc\" DevicePath \"\""
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.044151 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.086503 5136 generic.go:334] "Generic (PLEG): container finished" podID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerID="d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906" exitCode=0
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.086561 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4hwz2"
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.086602 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerDied","Data":"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906"}
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.086665 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4hwz2" event={"ID":"35c277d6-a1a4-484a-bf8a-cb58210afedd","Type":"ContainerDied","Data":"81a9f974105c9ce0fad94de5e00134979d625fe19fad5a54c8ea767113a113e2"}
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.086694 5136 scope.go:117] "RemoveContainer" containerID="d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906"
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.111660 5136 scope.go:117] "RemoveContainer" containerID="efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e"
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.138690 5136 scope.go:117] "RemoveContainer" containerID="6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5"
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.142857 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "35c277d6-a1a4-484a-bf8a-cb58210afedd" (UID: "35c277d6-a1a4-484a-bf8a-cb58210afedd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.145337 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/35c277d6-a1a4-484a-bf8a-cb58210afedd-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.164775 5136 scope.go:117] "RemoveContainer" containerID="d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906"
Mar 20 08:08:14 crc kubenswrapper[5136]: E0320 08:08:14.165237 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906\": container with ID starting with d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906 not found: ID does not exist" containerID="d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906"
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.165269 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906"} err="failed to get container status \"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906\": rpc error: code = NotFound desc = could not find container \"d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906\": container with ID starting with d6f5d847f74784183e439999223e867f4c86e5729a27c84b84985a22ad781906 not found: ID does not exist"
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.165291 5136 scope.go:117] "RemoveContainer" containerID="efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e"
Mar 20 08:08:14 crc kubenswrapper[5136]: E0320 08:08:14.165548 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e\": container with ID starting with efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e not found: ID does not exist" containerID="efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e"
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.165570 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e"} err="failed to get container status \"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e\": rpc error: code = NotFound desc = could not find container \"efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e\": container with ID starting with efe5f42b95f06ead1876a723806758f60f07a5cfd58bb541248d1216fc5a608e not found: ID does not exist"
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.165584 5136 scope.go:117] "RemoveContainer" containerID="6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5"
Mar 20 08:08:14 crc kubenswrapper[5136]: E0320 08:08:14.165921 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5\": container with ID starting with 6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5 not found: ID does not exist" containerID="6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5"
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.165985 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5"} err="failed to get container status \"6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5\": rpc error: code = NotFound desc = could not find container \"6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5\": container with ID starting with 6de912e3d14dade4eaa8e15869a4fc295a324df503206b3982fb65b3bacb4cc5 not found: ID does not exist"
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.435410 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"]
Mar 20 08:08:14 crc kubenswrapper[5136]: I0320 08:08:14.445719 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4hwz2"]
Mar 20 08:08:16 crc kubenswrapper[5136]: I0320 08:08:16.409535 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" path="/var/lib/kubelet/pods/35c277d6-a1a4-484a-bf8a-cb58210afedd/volumes"
Mar 20 08:08:24 crc kubenswrapper[5136]: I0320 08:08:24.396208 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:08:24 crc kubenswrapper[5136]: E0320 08:08:24.396767 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:08:36 crc kubenswrapper[5136]: I0320 08:08:36.004453 5136 scope.go:117] "RemoveContainer" containerID="58b402a854cc55b74a5a39d5f73121fda2fdd8d8cefb4c57e5aa94f8a9a79d4e"
Mar 20 08:08:39 crc kubenswrapper[5136]: I0320 08:08:39.396625 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:08:39 crc kubenswrapper[5136]: E0320 08:08:39.397495 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:08:53 crc kubenswrapper[5136]: I0320 08:08:53.396243 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:08:53 crc kubenswrapper[5136]: E0320 08:08:53.396982 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:09:06 crc kubenswrapper[5136]: I0320 08:09:06.397634 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:09:06 crc kubenswrapper[5136]: E0320 08:09:06.398876 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:09:17 crc kubenswrapper[5136]: I0320 08:09:17.396313 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:09:17 crc kubenswrapper[5136]: E0320 08:09:17.397210 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:09:31 crc kubenswrapper[5136]: I0320 08:09:31.396623 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:09:31 crc kubenswrapper[5136]: E0320 08:09:31.397360 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:09:45 crc kubenswrapper[5136]: I0320 08:09:45.397618 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:09:45 crc kubenswrapper[5136]: E0320 08:09:45.398654 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:09:56 crc kubenswrapper[5136]: I0320 08:09:56.397310 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:09:56 crc kubenswrapper[5136]: E0320 08:09:56.398416 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.199647 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566570-hgkrr"] Mar 20 08:10:00 crc kubenswrapper[5136]: E0320 08:10:00.201216 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="extract-content" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.201243 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="extract-content" Mar 20 08:10:00 crc kubenswrapper[5136]: E0320 08:10:00.201260 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b174d612-6f70-49f1-a024-93c2a9bd0824" containerName="oc" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.201273 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b174d612-6f70-49f1-a024-93c2a9bd0824" containerName="oc" Mar 20 08:10:00 crc kubenswrapper[5136]: E0320 08:10:00.201294 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="registry-server" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.201308 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="registry-server" Mar 20 08:10:00 crc kubenswrapper[5136]: E0320 08:10:00.201347 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="extract-utilities" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.201360 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="extract-utilities" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.204057 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b174d612-6f70-49f1-a024-93c2a9bd0824" containerName="oc" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.204107 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="35c277d6-a1a4-484a-bf8a-cb58210afedd" containerName="registry-server" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.204754 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566570-hgkrr" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.207476 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.209374 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.209763 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.230465 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566570-hgkrr"] Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.255554 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp5kv\" (UniqueName: \"kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv\") pod \"auto-csr-approver-29566570-hgkrr\" (UID: \"04302f0d-411c-49b0-8682-e64bb02c697d\") " pod="openshift-infra/auto-csr-approver-29566570-hgkrr" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.356860 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp5kv\" (UniqueName: \"kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv\") pod \"auto-csr-approver-29566570-hgkrr\" (UID: \"04302f0d-411c-49b0-8682-e64bb02c697d\") " pod="openshift-infra/auto-csr-approver-29566570-hgkrr" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.389666 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp5kv\" (UniqueName: \"kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv\") pod \"auto-csr-approver-29566570-hgkrr\" (UID: \"04302f0d-411c-49b0-8682-e64bb02c697d\") " 
pod="openshift-infra/auto-csr-approver-29566570-hgkrr" Mar 20 08:10:00 crc kubenswrapper[5136]: I0320 08:10:00.533156 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566570-hgkrr" Mar 20 08:10:01 crc kubenswrapper[5136]: I0320 08:10:01.034779 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566570-hgkrr"] Mar 20 08:10:01 crc kubenswrapper[5136]: I0320 08:10:01.976418 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566570-hgkrr" event={"ID":"04302f0d-411c-49b0-8682-e64bb02c697d","Type":"ContainerStarted","Data":"f96bfa114c776de7a58b86b2866b3939409e0f7882cc08dd6b4d8966fd6a33dd"} Mar 20 08:10:02 crc kubenswrapper[5136]: I0320 08:10:02.985474 5136 generic.go:334] "Generic (PLEG): container finished" podID="04302f0d-411c-49b0-8682-e64bb02c697d" containerID="79d8ba8cc4cde24163fe3c378a9767a3b425536e543bf53f50b55aaf7f5ba019" exitCode=0 Mar 20 08:10:02 crc kubenswrapper[5136]: I0320 08:10:02.985539 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566570-hgkrr" event={"ID":"04302f0d-411c-49b0-8682-e64bb02c697d","Type":"ContainerDied","Data":"79d8ba8cc4cde24163fe3c378a9767a3b425536e543bf53f50b55aaf7f5ba019"} Mar 20 08:10:04 crc kubenswrapper[5136]: I0320 08:10:04.328777 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566570-hgkrr" Mar 20 08:10:04 crc kubenswrapper[5136]: I0320 08:10:04.411710 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp5kv\" (UniqueName: \"kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv\") pod \"04302f0d-411c-49b0-8682-e64bb02c697d\" (UID: \"04302f0d-411c-49b0-8682-e64bb02c697d\") " Mar 20 08:10:04 crc kubenswrapper[5136]: I0320 08:10:04.416403 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv" (OuterVolumeSpecName: "kube-api-access-pp5kv") pod "04302f0d-411c-49b0-8682-e64bb02c697d" (UID: "04302f0d-411c-49b0-8682-e64bb02c697d"). InnerVolumeSpecName "kube-api-access-pp5kv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:10:04 crc kubenswrapper[5136]: I0320 08:10:04.512926 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp5kv\" (UniqueName: \"kubernetes.io/projected/04302f0d-411c-49b0-8682-e64bb02c697d-kube-api-access-pp5kv\") on node \"crc\" DevicePath \"\"" Mar 20 08:10:05 crc kubenswrapper[5136]: I0320 08:10:05.008225 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566570-hgkrr" event={"ID":"04302f0d-411c-49b0-8682-e64bb02c697d","Type":"ContainerDied","Data":"f96bfa114c776de7a58b86b2866b3939409e0f7882cc08dd6b4d8966fd6a33dd"} Mar 20 08:10:05 crc kubenswrapper[5136]: I0320 08:10:05.008274 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f96bfa114c776de7a58b86b2866b3939409e0f7882cc08dd6b4d8966fd6a33dd" Mar 20 08:10:05 crc kubenswrapper[5136]: I0320 08:10:05.008294 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566570-hgkrr" Mar 20 08:10:05 crc kubenswrapper[5136]: I0320 08:10:05.392353 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566564-wps8c"] Mar 20 08:10:05 crc kubenswrapper[5136]: I0320 08:10:05.397320 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566564-wps8c"] Mar 20 08:10:06 crc kubenswrapper[5136]: I0320 08:10:06.407467 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd0441d6-4822-4c0e-b72c-b33d59e4a81b" path="/var/lib/kubelet/pods/dd0441d6-4822-4c0e-b72c-b33d59e4a81b/volumes" Mar 20 08:10:11 crc kubenswrapper[5136]: I0320 08:10:11.397302 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:10:11 crc kubenswrapper[5136]: E0320 08:10:11.398144 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:10:25 crc kubenswrapper[5136]: I0320 08:10:25.397258 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:10:25 crc kubenswrapper[5136]: E0320 08:10:25.398005 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:10:36 crc kubenswrapper[5136]: I0320 08:10:36.138670 5136 scope.go:117] "RemoveContainer" containerID="a8db32019a3eb6483d295a328515205a1810920d8bfa5e500df3dffc05d44642" Mar 20 08:10:38 crc kubenswrapper[5136]: I0320 08:10:38.404353 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:10:38 crc kubenswrapper[5136]: E0320 08:10:38.405154 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:10:52 crc kubenswrapper[5136]: I0320 08:10:52.397347 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:10:52 crc kubenswrapper[5136]: E0320 08:10:52.398409 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.455507 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"] Mar 20 08:10:56 crc kubenswrapper[5136]: E0320 08:10:56.457192 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04302f0d-411c-49b0-8682-e64bb02c697d" containerName="oc" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.457226 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="04302f0d-411c-49b0-8682-e64bb02c697d" containerName="oc" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.457490 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="04302f0d-411c-49b0-8682-e64bb02c697d" containerName="oc" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.458806 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.462139 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"] Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.514887 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.515013 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.515054 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grr85\" (UniqueName: \"kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.616522 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.616582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.616602 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grr85\" (UniqueName: \"kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.617136 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.617162 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.644287 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-grr85\" (UniqueName: \"kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85\") pod \"certified-operators-rmw4k\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:10:56 crc kubenswrapper[5136]: I0320 08:10:56.775736 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:10:57 crc kubenswrapper[5136]: I0320 08:10:57.237742 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"] Mar 20 08:10:57 crc kubenswrapper[5136]: I0320 08:10:57.425396 5136 generic.go:334] "Generic (PLEG): container finished" podID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerID="270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf" exitCode=0 Mar 20 08:10:57 crc kubenswrapper[5136]: I0320 08:10:57.425445 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerDied","Data":"270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf"} Mar 20 08:10:57 crc kubenswrapper[5136]: I0320 08:10:57.425495 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerStarted","Data":"90bdd832502596574851d98ed6ffbdebe2040ef137ca2eebb9103a5a610cac71"} Mar 20 08:10:58 crc kubenswrapper[5136]: I0320 08:10:58.438368 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerStarted","Data":"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233"} Mar 20 08:10:59 crc kubenswrapper[5136]: I0320 08:10:59.451601 5136 generic.go:334] "Generic (PLEG): 
container finished" podID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerID="701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233" exitCode=0 Mar 20 08:10:59 crc kubenswrapper[5136]: I0320 08:10:59.451668 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerDied","Data":"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233"} Mar 20 08:11:00 crc kubenswrapper[5136]: I0320 08:11:00.460360 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerStarted","Data":"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0"} Mar 20 08:11:00 crc kubenswrapper[5136]: I0320 08:11:00.480626 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-rmw4k" podStartSLOduration=2.038028789 podStartE2EDuration="4.480606063s" podCreationTimestamp="2026-03-20 08:10:56 +0000 UTC" firstStartedPulling="2026-03-20 08:10:57.426681555 +0000 UTC m=+4889.685992706" lastFinishedPulling="2026-03-20 08:10:59.869258789 +0000 UTC m=+4892.128569980" observedRunningTime="2026-03-20 08:11:00.480245503 +0000 UTC m=+4892.739556664" watchObservedRunningTime="2026-03-20 08:11:00.480606063 +0000 UTC m=+4892.739917214" Mar 20 08:11:05 crc kubenswrapper[5136]: I0320 08:11:05.396276 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:11:05 crc kubenswrapper[5136]: E0320 08:11:05.396749 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:11:06 crc kubenswrapper[5136]: I0320 08:11:06.777407 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:11:06 crc kubenswrapper[5136]: I0320 08:11:06.778085 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:11:06 crc kubenswrapper[5136]: I0320 08:11:06.852610 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:11:07 crc kubenswrapper[5136]: I0320 08:11:07.564555 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:11:07 crc kubenswrapper[5136]: I0320 08:11:07.619664 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"] Mar 20 08:11:09 crc kubenswrapper[5136]: I0320 08:11:09.537276 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-rmw4k" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="registry-server" containerID="cri-o://47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0" gracePeriod=2 Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.371019 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.525162 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content\") pod \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.525536 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grr85\" (UniqueName: \"kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85\") pod \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.525904 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities\") pod \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\" (UID: \"ba6e1a2e-96ff-4c0b-b86a-9c948d147361\") " Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.527360 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities" (OuterVolumeSpecName: "utilities") pod "ba6e1a2e-96ff-4c0b-b86a-9c948d147361" (UID: "ba6e1a2e-96ff-4c0b-b86a-9c948d147361"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.532936 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85" (OuterVolumeSpecName: "kube-api-access-grr85") pod "ba6e1a2e-96ff-4c0b-b86a-9c948d147361" (UID: "ba6e1a2e-96ff-4c0b-b86a-9c948d147361"). InnerVolumeSpecName "kube-api-access-grr85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.546023 5136 generic.go:334] "Generic (PLEG): container finished" podID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerID="47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0" exitCode=0 Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.546068 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerDied","Data":"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0"} Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.546097 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-rmw4k" event={"ID":"ba6e1a2e-96ff-4c0b-b86a-9c948d147361","Type":"ContainerDied","Data":"90bdd832502596574851d98ed6ffbdebe2040ef137ca2eebb9103a5a610cac71"} Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.546118 5136 scope.go:117] "RemoveContainer" containerID="47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.546253 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-rmw4k" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.580472 5136 scope.go:117] "RemoveContainer" containerID="701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.588061 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ba6e1a2e-96ff-4c0b-b86a-9c948d147361" (UID: "ba6e1a2e-96ff-4c0b-b86a-9c948d147361"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.599399 5136 scope.go:117] "RemoveContainer" containerID="270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.627345 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.627374 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grr85\" (UniqueName: \"kubernetes.io/projected/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-kube-api-access-grr85\") on node \"crc\" DevicePath \"\"" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.627387 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ba6e1a2e-96ff-4c0b-b86a-9c948d147361-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.633690 5136 scope.go:117] "RemoveContainer" containerID="47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0" Mar 20 08:11:10 crc kubenswrapper[5136]: E0320 08:11:10.634149 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0\": container with ID starting with 47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0 not found: ID does not exist" containerID="47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.634187 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0"} err="failed to get container status 
\"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0\": rpc error: code = NotFound desc = could not find container \"47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0\": container with ID starting with 47daacead4d75fbd7e009c9e6c683245084de651fa2a44f3f368c07fed93f3c0 not found: ID does not exist" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.634210 5136 scope.go:117] "RemoveContainer" containerID="701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233" Mar 20 08:11:10 crc kubenswrapper[5136]: E0320 08:11:10.634538 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233\": container with ID starting with 701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233 not found: ID does not exist" containerID="701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.634585 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233"} err="failed to get container status \"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233\": rpc error: code = NotFound desc = could not find container \"701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233\": container with ID starting with 701754c475b6e9d6d6bddb33807bd69e1788d1e9f1de5dea48a6211ca14f0233 not found: ID does not exist" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.634617 5136 scope.go:117] "RemoveContainer" containerID="270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf" Mar 20 08:11:10 crc kubenswrapper[5136]: E0320 08:11:10.634961 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf\": container with ID starting with 270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf not found: ID does not exist" containerID="270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.634990 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf"} err="failed to get container status \"270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf\": rpc error: code = NotFound desc = could not find container \"270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf\": container with ID starting with 270d94688957ff316448089c0328c19eaf4b0d8943c97e22ecaab09c199df8cf not found: ID does not exist" Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.880010 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"] Mar 20 08:11:10 crc kubenswrapper[5136]: I0320 08:11:10.888336 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-rmw4k"] Mar 20 08:11:12 crc kubenswrapper[5136]: I0320 08:11:12.407406 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" path="/var/lib/kubelet/pods/ba6e1a2e-96ff-4c0b-b86a-9c948d147361/volumes" Mar 20 08:11:20 crc kubenswrapper[5136]: I0320 08:11:20.397103 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:11:20 crc kubenswrapper[5136]: E0320 08:11:20.397673 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:11:35 crc kubenswrapper[5136]: I0320 08:11:35.396866 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:11:35 crc kubenswrapper[5136]: E0320 08:11:35.398003 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:11:49 crc kubenswrapper[5136]: I0320 08:11:49.396786 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:11:49 crc kubenswrapper[5136]: E0320 08:11:49.397493 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.172976 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566572-59rnc"] Mar 20 08:12:00 crc kubenswrapper[5136]: E0320 08:12:00.174338 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="extract-content" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 
08:12:00.174373 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="extract-content" Mar 20 08:12:00 crc kubenswrapper[5136]: E0320 08:12:00.174441 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="registry-server" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.174461 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="registry-server" Mar 20 08:12:00 crc kubenswrapper[5136]: E0320 08:12:00.174514 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="extract-utilities" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.174534 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="extract-utilities" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.174915 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba6e1a2e-96ff-4c0b-b86a-9c948d147361" containerName="registry-server" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.175985 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566572-59rnc" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.181224 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.181346 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.182168 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.184235 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566572-59rnc"] Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.309861 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmc28\" (UniqueName: \"kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28\") pod \"auto-csr-approver-29566572-59rnc\" (UID: \"e251183d-ffbf-414f-9d88-5830637722be\") " pod="openshift-infra/auto-csr-approver-29566572-59rnc" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.411618 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hmc28\" (UniqueName: \"kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28\") pod \"auto-csr-approver-29566572-59rnc\" (UID: \"e251183d-ffbf-414f-9d88-5830637722be\") " pod="openshift-infra/auto-csr-approver-29566572-59rnc" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.449039 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmc28\" (UniqueName: \"kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28\") pod \"auto-csr-approver-29566572-59rnc\" (UID: \"e251183d-ffbf-414f-9d88-5830637722be\") " 
pod="openshift-infra/auto-csr-approver-29566572-59rnc" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.505875 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566572-59rnc" Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.932668 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566572-59rnc"] Mar 20 08:12:00 crc kubenswrapper[5136]: I0320 08:12:00.972500 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566572-59rnc" event={"ID":"e251183d-ffbf-414f-9d88-5830637722be","Type":"ContainerStarted","Data":"ac444ccd6467e4f8466b4aea1cc2a5776eaaba3c507a635df062455df3f23bf0"} Mar 20 08:12:02 crc kubenswrapper[5136]: I0320 08:12:02.993980 5136 generic.go:334] "Generic (PLEG): container finished" podID="e251183d-ffbf-414f-9d88-5830637722be" containerID="b9689c90ff59dd42fc4279d62977b62b1e34234f2f912a119c0aaec47a889e16" exitCode=0 Mar 20 08:12:02 crc kubenswrapper[5136]: I0320 08:12:02.994034 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566572-59rnc" event={"ID":"e251183d-ffbf-414f-9d88-5830637722be","Type":"ContainerDied","Data":"b9689c90ff59dd42fc4279d62977b62b1e34234f2f912a119c0aaec47a889e16"} Mar 20 08:12:03 crc kubenswrapper[5136]: I0320 08:12:03.397491 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:12:03 crc kubenswrapper[5136]: E0320 08:12:03.397998 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" 
Mar 20 08:12:04 crc kubenswrapper[5136]: I0320 08:12:04.308325 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566572-59rnc" Mar 20 08:12:04 crc kubenswrapper[5136]: I0320 08:12:04.476007 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmc28\" (UniqueName: \"kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28\") pod \"e251183d-ffbf-414f-9d88-5830637722be\" (UID: \"e251183d-ffbf-414f-9d88-5830637722be\") " Mar 20 08:12:04 crc kubenswrapper[5136]: I0320 08:12:04.489716 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28" (OuterVolumeSpecName: "kube-api-access-hmc28") pod "e251183d-ffbf-414f-9d88-5830637722be" (UID: "e251183d-ffbf-414f-9d88-5830637722be"). InnerVolumeSpecName "kube-api-access-hmc28". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:12:04 crc kubenswrapper[5136]: I0320 08:12:04.577236 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hmc28\" (UniqueName: \"kubernetes.io/projected/e251183d-ffbf-414f-9d88-5830637722be-kube-api-access-hmc28\") on node \"crc\" DevicePath \"\"" Mar 20 08:12:05 crc kubenswrapper[5136]: I0320 08:12:05.022351 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566572-59rnc" event={"ID":"e251183d-ffbf-414f-9d88-5830637722be","Type":"ContainerDied","Data":"ac444ccd6467e4f8466b4aea1cc2a5776eaaba3c507a635df062455df3f23bf0"} Mar 20 08:12:05 crc kubenswrapper[5136]: I0320 08:12:05.022410 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac444ccd6467e4f8466b4aea1cc2a5776eaaba3c507a635df062455df3f23bf0" Mar 20 08:12:05 crc kubenswrapper[5136]: I0320 08:12:05.022453 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566572-59rnc" Mar 20 08:12:05 crc kubenswrapper[5136]: I0320 08:12:05.399505 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566566-tj6mv"] Mar 20 08:12:05 crc kubenswrapper[5136]: I0320 08:12:05.411352 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566566-tj6mv"] Mar 20 08:12:06 crc kubenswrapper[5136]: I0320 08:12:06.414279 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1171863e-bf58-4961-a881-403e291cc93a" path="/var/lib/kubelet/pods/1171863e-bf58-4961-a881-403e291cc93a/volumes" Mar 20 08:12:15 crc kubenswrapper[5136]: I0320 08:12:15.404081 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:12:15 crc kubenswrapper[5136]: E0320 08:12:15.405096 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:12:29 crc kubenswrapper[5136]: I0320 08:12:29.396651 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82" Mar 20 08:12:30 crc kubenswrapper[5136]: I0320 08:12:30.251690 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14"} Mar 20 08:12:36 crc kubenswrapper[5136]: I0320 08:12:36.272090 5136 scope.go:117] "RemoveContainer" 
containerID="b1deef80cd1c3d1582469b2cb38e1b1a394ed2e7b6171fb2539451da0bf3a162" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.175727 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566574-kgksp"] Mar 20 08:14:00 crc kubenswrapper[5136]: E0320 08:14:00.176937 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e251183d-ffbf-414f-9d88-5830637722be" containerName="oc" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.176959 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e251183d-ffbf-414f-9d88-5830637722be" containerName="oc" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.177162 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e251183d-ffbf-414f-9d88-5830637722be" containerName="oc" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.177803 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.184261 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.184321 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.184422 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.189836 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566574-kgksp"] Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.268266 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x2nm\" (UniqueName: \"kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm\") 
pod \"auto-csr-approver-29566574-kgksp\" (UID: \"aa57e02b-5eb6-401e-997d-a451c285486e\") " pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.370042 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x2nm\" (UniqueName: \"kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm\") pod \"auto-csr-approver-29566574-kgksp\" (UID: \"aa57e02b-5eb6-401e-997d-a451c285486e\") " pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.402918 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x2nm\" (UniqueName: \"kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm\") pod \"auto-csr-approver-29566574-kgksp\" (UID: \"aa57e02b-5eb6-401e-997d-a451c285486e\") " pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.512222 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.754227 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566574-kgksp"] Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.766838 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:14:00 crc kubenswrapper[5136]: I0320 08:14:00.963166 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566574-kgksp" event={"ID":"aa57e02b-5eb6-401e-997d-a451c285486e","Type":"ContainerStarted","Data":"a16e247caaaa1e1b4ac37bc80fe9cb32bc37aaad6451716e9c3b0896807d8606"} Mar 20 08:14:01 crc kubenswrapper[5136]: I0320 08:14:01.971184 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566574-kgksp" event={"ID":"aa57e02b-5eb6-401e-997d-a451c285486e","Type":"ContainerStarted","Data":"f86c6f9b4ef14a2e3961efc76d0057737a2a18ff9c282b67f83d05bca07f26b4"} Mar 20 08:14:01 crc kubenswrapper[5136]: I0320 08:14:01.989834 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566574-kgksp" podStartSLOduration=1.115872327 podStartE2EDuration="1.98979554s" podCreationTimestamp="2026-03-20 08:14:00 +0000 UTC" firstStartedPulling="2026-03-20 08:14:00.766552882 +0000 UTC m=+5073.025864033" lastFinishedPulling="2026-03-20 08:14:01.640476095 +0000 UTC m=+5073.899787246" observedRunningTime="2026-03-20 08:14:01.984862338 +0000 UTC m=+5074.244173509" watchObservedRunningTime="2026-03-20 08:14:01.98979554 +0000 UTC m=+5074.249106711" Mar 20 08:14:02 crc kubenswrapper[5136]: I0320 08:14:02.980891 5136 generic.go:334] "Generic (PLEG): container finished" podID="aa57e02b-5eb6-401e-997d-a451c285486e" containerID="f86c6f9b4ef14a2e3961efc76d0057737a2a18ff9c282b67f83d05bca07f26b4" exitCode=0 Mar 20 08:14:02 crc kubenswrapper[5136]: 
I0320 08:14:02.980954 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566574-kgksp" event={"ID":"aa57e02b-5eb6-401e-997d-a451c285486e","Type":"ContainerDied","Data":"f86c6f9b4ef14a2e3961efc76d0057737a2a18ff9c282b67f83d05bca07f26b4"} Mar 20 08:14:04 crc kubenswrapper[5136]: I0320 08:14:04.362446 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:04 crc kubenswrapper[5136]: I0320 08:14:04.550799 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7x2nm\" (UniqueName: \"kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm\") pod \"aa57e02b-5eb6-401e-997d-a451c285486e\" (UID: \"aa57e02b-5eb6-401e-997d-a451c285486e\") " Mar 20 08:14:04 crc kubenswrapper[5136]: I0320 08:14:04.556805 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm" (OuterVolumeSpecName: "kube-api-access-7x2nm") pod "aa57e02b-5eb6-401e-997d-a451c285486e" (UID: "aa57e02b-5eb6-401e-997d-a451c285486e"). InnerVolumeSpecName "kube-api-access-7x2nm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:14:04 crc kubenswrapper[5136]: I0320 08:14:04.652558 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7x2nm\" (UniqueName: \"kubernetes.io/projected/aa57e02b-5eb6-401e-997d-a451c285486e-kube-api-access-7x2nm\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:05 crc kubenswrapper[5136]: I0320 08:14:05.006084 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566574-kgksp" event={"ID":"aa57e02b-5eb6-401e-997d-a451c285486e","Type":"ContainerDied","Data":"a16e247caaaa1e1b4ac37bc80fe9cb32bc37aaad6451716e9c3b0896807d8606"} Mar 20 08:14:05 crc kubenswrapper[5136]: I0320 08:14:05.006136 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a16e247caaaa1e1b4ac37bc80fe9cb32bc37aaad6451716e9c3b0896807d8606" Mar 20 08:14:05 crc kubenswrapper[5136]: I0320 08:14:05.006202 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566574-kgksp" Mar 20 08:14:05 crc kubenswrapper[5136]: I0320 08:14:05.052985 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566568-t2xwb"] Mar 20 08:14:05 crc kubenswrapper[5136]: I0320 08:14:05.059409 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566568-t2xwb"] Mar 20 08:14:06 crc kubenswrapper[5136]: I0320 08:14:06.409452 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b174d612-6f70-49f1-a024-93c2a9bd0824" path="/var/lib/kubelet/pods/b174d612-6f70-49f1-a024-93c2a9bd0824/volumes" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.832564 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:12 crc kubenswrapper[5136]: E0320 08:14:12.833246 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aa57e02b-5eb6-401e-997d-a451c285486e" containerName="oc" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.833257 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa57e02b-5eb6-401e-997d-a451c285486e" containerName="oc" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.833409 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa57e02b-5eb6-401e-997d-a451c285486e" containerName="oc" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.835129 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.849313 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.976840 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.977184 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:12 crc kubenswrapper[5136]: I0320 08:14:12.977339 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgrrl\" (UniqueName: \"kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " 
pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.078525 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.078590 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.078626 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgrrl\" (UniqueName: \"kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.079216 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.079373 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " 
pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.095550 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgrrl\" (UniqueName: \"kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl\") pod \"community-operators-x2f65\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.172685 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:13 crc kubenswrapper[5136]: I0320 08:14:13.692616 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:14 crc kubenswrapper[5136]: I0320 08:14:14.083272 5136 generic.go:334] "Generic (PLEG): container finished" podID="5c4048be-e11d-4237-a81a-abf158f5769c" containerID="d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066" exitCode=0 Mar 20 08:14:14 crc kubenswrapper[5136]: I0320 08:14:14.083344 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerDied","Data":"d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066"} Mar 20 08:14:14 crc kubenswrapper[5136]: I0320 08:14:14.083599 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerStarted","Data":"9e02d4de71743bfbef48bca2789ad9c6ca17dee5f5ffd60b25fb33f56e0796df"} Mar 20 08:14:15 crc kubenswrapper[5136]: I0320 08:14:15.093897 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" 
event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerStarted","Data":"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee"} Mar 20 08:14:16 crc kubenswrapper[5136]: I0320 08:14:16.107605 5136 generic.go:334] "Generic (PLEG): container finished" podID="5c4048be-e11d-4237-a81a-abf158f5769c" containerID="bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee" exitCode=0 Mar 20 08:14:16 crc kubenswrapper[5136]: I0320 08:14:16.107670 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerDied","Data":"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee"} Mar 20 08:14:17 crc kubenswrapper[5136]: I0320 08:14:17.118148 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerStarted","Data":"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5"} Mar 20 08:14:17 crc kubenswrapper[5136]: I0320 08:14:17.164017 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-x2f65" podStartSLOduration=2.507148705 podStartE2EDuration="5.164000348s" podCreationTimestamp="2026-03-20 08:14:12 +0000 UTC" firstStartedPulling="2026-03-20 08:14:14.08533617 +0000 UTC m=+5086.344647321" lastFinishedPulling="2026-03-20 08:14:16.742187783 +0000 UTC m=+5089.001498964" observedRunningTime="2026-03-20 08:14:17.162174621 +0000 UTC m=+5089.421485772" watchObservedRunningTime="2026-03-20 08:14:17.164000348 +0000 UTC m=+5089.423311499" Mar 20 08:14:23 crc kubenswrapper[5136]: I0320 08:14:23.174225 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:23 crc kubenswrapper[5136]: I0320 08:14:23.174280 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:23 crc kubenswrapper[5136]: I0320 08:14:23.236608 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:24 crc kubenswrapper[5136]: I0320 08:14:24.223592 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:24 crc kubenswrapper[5136]: I0320 08:14:24.285747 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.190405 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-x2f65" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="registry-server" containerID="cri-o://cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5" gracePeriod=2 Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.626957 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.689774 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities\") pod \"5c4048be-e11d-4237-a81a-abf158f5769c\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.690428 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgrrl\" (UniqueName: \"kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl\") pod \"5c4048be-e11d-4237-a81a-abf158f5769c\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.690541 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content\") pod \"5c4048be-e11d-4237-a81a-abf158f5769c\" (UID: \"5c4048be-e11d-4237-a81a-abf158f5769c\") " Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.693144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities" (OuterVolumeSpecName: "utilities") pod "5c4048be-e11d-4237-a81a-abf158f5769c" (UID: "5c4048be-e11d-4237-a81a-abf158f5769c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.704133 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl" (OuterVolumeSpecName: "kube-api-access-lgrrl") pod "5c4048be-e11d-4237-a81a-abf158f5769c" (UID: "5c4048be-e11d-4237-a81a-abf158f5769c"). InnerVolumeSpecName "kube-api-access-lgrrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.774319 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c4048be-e11d-4237-a81a-abf158f5769c" (UID: "5c4048be-e11d-4237-a81a-abf158f5769c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.792195 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.792231 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgrrl\" (UniqueName: \"kubernetes.io/projected/5c4048be-e11d-4237-a81a-abf158f5769c-kube-api-access-lgrrl\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:26 crc kubenswrapper[5136]: I0320 08:14:26.792242 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c4048be-e11d-4237-a81a-abf158f5769c-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.199880 5136 generic.go:334] "Generic (PLEG): container finished" podID="5c4048be-e11d-4237-a81a-abf158f5769c" containerID="cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5" exitCode=0 Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.199937 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerDied","Data":"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5"} Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.200001 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-x2f65" event={"ID":"5c4048be-e11d-4237-a81a-abf158f5769c","Type":"ContainerDied","Data":"9e02d4de71743bfbef48bca2789ad9c6ca17dee5f5ffd60b25fb33f56e0796df"} Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.200023 5136 scope.go:117] "RemoveContainer" containerID="cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.199954 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-x2f65" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.218040 5136 scope.go:117] "RemoveContainer" containerID="bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.236491 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.241781 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-x2f65"] Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.271488 5136 scope.go:117] "RemoveContainer" containerID="d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.288748 5136 scope.go:117] "RemoveContainer" containerID="cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5" Mar 20 08:14:27 crc kubenswrapper[5136]: E0320 08:14:27.289174 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5\": container with ID starting with cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5 not found: ID does not exist" containerID="cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 
08:14:27.289212 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5"} err="failed to get container status \"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5\": rpc error: code = NotFound desc = could not find container \"cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5\": container with ID starting with cbc6a5a5977bd4361f0a06a15b6f0c484fbe479f399b542d448a878ae46d7ed5 not found: ID does not exist" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.289231 5136 scope.go:117] "RemoveContainer" containerID="bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee" Mar 20 08:14:27 crc kubenswrapper[5136]: E0320 08:14:27.289500 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee\": container with ID starting with bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee not found: ID does not exist" containerID="bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.289521 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee"} err="failed to get container status \"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee\": rpc error: code = NotFound desc = could not find container \"bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee\": container with ID starting with bbc1298cb6948daace9e5ed6677563dff19cc7e5289b900c77b58ab5b38150ee not found: ID does not exist" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.289534 5136 scope.go:117] "RemoveContainer" containerID="d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066" Mar 20 08:14:27 crc 
kubenswrapper[5136]: E0320 08:14:27.289736 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066\": container with ID starting with d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066 not found: ID does not exist" containerID="d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066" Mar 20 08:14:27 crc kubenswrapper[5136]: I0320 08:14:27.289751 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066"} err="failed to get container status \"d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066\": rpc error: code = NotFound desc = could not find container \"d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066\": container with ID starting with d6225c9cedc3abddc5b8e87b8a7898daf5b7af1e662ea5e3d463ad89f159f066 not found: ID does not exist" Mar 20 08:14:28 crc kubenswrapper[5136]: I0320 08:14:28.411630 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" path="/var/lib/kubelet/pods/5c4048be-e11d-4237-a81a-abf158f5769c/volumes" Mar 20 08:14:36 crc kubenswrapper[5136]: I0320 08:14:36.395151 5136 scope.go:117] "RemoveContainer" containerID="cfc59d82836f1e5aa8be6bb29641caa9e94e4841e523822550b31308b0957aae" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.748089 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:43 crc kubenswrapper[5136]: E0320 08:14:43.748994 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="extract-utilities" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.749009 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="extract-utilities" Mar 20 08:14:43 crc kubenswrapper[5136]: E0320 08:14:43.749035 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="extract-content" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.749044 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="extract-content" Mar 20 08:14:43 crc kubenswrapper[5136]: E0320 08:14:43.749056 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="registry-server" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.749066 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="registry-server" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.749225 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4048be-e11d-4237-a81a-abf158f5769c" containerName="registry-server" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.750360 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.766165 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.851448 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4l45\" (UniqueName: \"kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.851732 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.851833 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.953040 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4l45\" (UniqueName: \"kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.953155 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.953193 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.953629 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.953667 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:43 crc kubenswrapper[5136]: I0320 08:14:43.983061 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4l45\" (UniqueName: \"kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45\") pod \"redhat-marketplace-nmr9m\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:44 crc kubenswrapper[5136]: I0320 08:14:44.078191 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:44 crc kubenswrapper[5136]: I0320 08:14:44.485059 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:44 crc kubenswrapper[5136]: W0320 08:14:44.491561 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7c24947f_946c_46e2_b9f5_0ec67f66a8fa.slice/crio-824c727394022780fe78bc03117fbbf21f94848a2fad49540b8e049f0ac9e77e WatchSource:0}: Error finding container 824c727394022780fe78bc03117fbbf21f94848a2fad49540b8e049f0ac9e77e: Status 404 returned error can't find the container with id 824c727394022780fe78bc03117fbbf21f94848a2fad49540b8e049f0ac9e77e Mar 20 08:14:45 crc kubenswrapper[5136]: I0320 08:14:45.348732 5136 generic.go:334] "Generic (PLEG): container finished" podID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerID="84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093" exitCode=0 Mar 20 08:14:45 crc kubenswrapper[5136]: I0320 08:14:45.348877 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerDied","Data":"84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093"} Mar 20 08:14:45 crc kubenswrapper[5136]: I0320 08:14:45.349231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerStarted","Data":"824c727394022780fe78bc03117fbbf21f94848a2fad49540b8e049f0ac9e77e"} Mar 20 08:14:45 crc kubenswrapper[5136]: I0320 08:14:45.822208 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" start-of-body= Mar 20 08:14:45 crc kubenswrapper[5136]: I0320 08:14:45.822293 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:14:46 crc kubenswrapper[5136]: I0320 08:14:46.357993 5136 generic.go:334] "Generic (PLEG): container finished" podID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerID="af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee" exitCode=0 Mar 20 08:14:46 crc kubenswrapper[5136]: I0320 08:14:46.358043 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerDied","Data":"af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee"} Mar 20 08:14:47 crc kubenswrapper[5136]: I0320 08:14:47.368030 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerStarted","Data":"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed"} Mar 20 08:14:47 crc kubenswrapper[5136]: I0320 08:14:47.390427 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nmr9m" podStartSLOduration=2.922353106 podStartE2EDuration="4.390405954s" podCreationTimestamp="2026-03-20 08:14:43 +0000 UTC" firstStartedPulling="2026-03-20 08:14:45.350582222 +0000 UTC m=+5117.609893413" lastFinishedPulling="2026-03-20 08:14:46.8186351 +0000 UTC m=+5119.077946261" observedRunningTime="2026-03-20 08:14:47.389795775 +0000 UTC m=+5119.649106976" watchObservedRunningTime="2026-03-20 08:14:47.390405954 +0000 UTC m=+5119.649717125" Mar 20 08:14:54 crc 
kubenswrapper[5136]: I0320 08:14:54.078646 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:54 crc kubenswrapper[5136]: I0320 08:14:54.079075 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:54 crc kubenswrapper[5136]: I0320 08:14:54.132174 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:54 crc kubenswrapper[5136]: I0320 08:14:54.518580 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:54 crc kubenswrapper[5136]: I0320 08:14:54.570708 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:56 crc kubenswrapper[5136]: I0320 08:14:56.433581 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nmr9m" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="registry-server" containerID="cri-o://ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed" gracePeriod=2 Mar 20 08:14:56 crc kubenswrapper[5136]: I0320 08:14:56.938230 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.067471 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content\") pod \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.067555 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities\") pod \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.067635 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4l45\" (UniqueName: \"kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45\") pod \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\" (UID: \"7c24947f-946c-46e2-b9f5-0ec67f66a8fa\") " Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.071053 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities" (OuterVolumeSpecName: "utilities") pod "7c24947f-946c-46e2-b9f5-0ec67f66a8fa" (UID: "7c24947f-946c-46e2-b9f5-0ec67f66a8fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.076219 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45" (OuterVolumeSpecName: "kube-api-access-d4l45") pod "7c24947f-946c-46e2-b9f5-0ec67f66a8fa" (UID: "7c24947f-946c-46e2-b9f5-0ec67f66a8fa"). InnerVolumeSpecName "kube-api-access-d4l45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.091444 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7c24947f-946c-46e2-b9f5-0ec67f66a8fa" (UID: "7c24947f-946c-46e2-b9f5-0ec67f66a8fa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.170091 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4l45\" (UniqueName: \"kubernetes.io/projected/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-kube-api-access-d4l45\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.170132 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.170146 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7c24947f-946c-46e2-b9f5-0ec67f66a8fa-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.448541 5136 generic.go:334] "Generic (PLEG): container finished" podID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerID="ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed" exitCode=0 Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.448664 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nmr9m" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.448623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerDied","Data":"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed"} Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.448925 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nmr9m" event={"ID":"7c24947f-946c-46e2-b9f5-0ec67f66a8fa","Type":"ContainerDied","Data":"824c727394022780fe78bc03117fbbf21f94848a2fad49540b8e049f0ac9e77e"} Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.448970 5136 scope.go:117] "RemoveContainer" containerID="ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.474997 5136 scope.go:117] "RemoveContainer" containerID="af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.505878 5136 scope.go:117] "RemoveContainer" containerID="84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.508746 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.516106 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nmr9m"] Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.542656 5136 scope.go:117] "RemoveContainer" containerID="ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed" Mar 20 08:14:57 crc kubenswrapper[5136]: E0320 08:14:57.543375 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed\": container with ID starting with ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed not found: ID does not exist" containerID="ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.543423 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed"} err="failed to get container status \"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed\": rpc error: code = NotFound desc = could not find container \"ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed\": container with ID starting with ab3f2684041e19f28de1dd890f250986fd7effeec3fc00b7589acd7258fbcfed not found: ID does not exist" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.543479 5136 scope.go:117] "RemoveContainer" containerID="af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee" Mar 20 08:14:57 crc kubenswrapper[5136]: E0320 08:14:57.544045 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee\": container with ID starting with af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee not found: ID does not exist" containerID="af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.544078 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee"} err="failed to get container status \"af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee\": rpc error: code = NotFound desc = could not find container \"af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee\": container with ID 
starting with af7ccb5884d8d6850a0338f1d69afbf8f5ee1cd293c6cfe06e1d05671daaf8ee not found: ID does not exist" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.544123 5136 scope.go:117] "RemoveContainer" containerID="84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093" Mar 20 08:14:57 crc kubenswrapper[5136]: E0320 08:14:57.544645 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093\": container with ID starting with 84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093 not found: ID does not exist" containerID="84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093" Mar 20 08:14:57 crc kubenswrapper[5136]: I0320 08:14:57.544726 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093"} err="failed to get container status \"84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093\": rpc error: code = NotFound desc = could not find container \"84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093\": container with ID starting with 84b89ab79cf289c6e3a1407fa37c81d9fd2d3657b7ef6f42c22ab525f9790093 not found: ID does not exist" Mar 20 08:14:58 crc kubenswrapper[5136]: I0320 08:14:58.410924 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" path="/var/lib/kubelet/pods/7c24947f-946c-46e2-b9f5-0ec67f66a8fa/volumes" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.155519 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"] Mar 20 08:15:00 crc kubenswrapper[5136]: E0320 08:15:00.155893 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="extract-content" Mar 20 
08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.155913 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="extract-content" Mar 20 08:15:00 crc kubenswrapper[5136]: E0320 08:15:00.155949 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="registry-server" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.155961 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="registry-server" Mar 20 08:15:00 crc kubenswrapper[5136]: E0320 08:15:00.155983 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="extract-utilities" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.155993 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="extract-utilities" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.156227 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c24947f-946c-46e2-b9f5-0ec67f66a8fa" containerName="registry-server" Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.156909 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.159188 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.159864 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.172514 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"]
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.224134 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.224267 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.224489 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwnk4\" (UniqueName: \"kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.326295 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.326415 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwnk4\" (UniqueName: \"kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.326568 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.327585 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.339596 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.348595 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwnk4\" (UniqueName: \"kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4\") pod \"collect-profiles-29566575-8lvms\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.487248 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:00 crc kubenswrapper[5136]: I0320 08:15:00.742047 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"]
Mar 20 08:15:01 crc kubenswrapper[5136]: I0320 08:15:01.478291 5136 generic.go:334] "Generic (PLEG): container finished" podID="02161682-1526-46e0-aaa6-d09c6758943c" containerID="d7c966f182c94b6eabaca701ac9e2f115b1d66510a14ffb108fa112317b9c2d8" exitCode=0
Mar 20 08:15:01 crc kubenswrapper[5136]: I0320 08:15:01.478343 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" event={"ID":"02161682-1526-46e0-aaa6-d09c6758943c","Type":"ContainerDied","Data":"d7c966f182c94b6eabaca701ac9e2f115b1d66510a14ffb108fa112317b9c2d8"}
Mar 20 08:15:01 crc kubenswrapper[5136]: I0320 08:15:01.478667 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" event={"ID":"02161682-1526-46e0-aaa6-d09c6758943c","Type":"ContainerStarted","Data":"dfa3996b6fa9b12331f1b2253e9996c6494379a9aacdaf656de0c773aca3ee4c"}
Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.838587 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.985151 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume\") pod \"02161682-1526-46e0-aaa6-d09c6758943c\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") "
Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.985296 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwnk4\" (UniqueName: \"kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4\") pod \"02161682-1526-46e0-aaa6-d09c6758943c\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") "
Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.985408 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume\") pod \"02161682-1526-46e0-aaa6-d09c6758943c\" (UID: \"02161682-1526-46e0-aaa6-d09c6758943c\") "
Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.987770 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume" (OuterVolumeSpecName: "config-volume") pod "02161682-1526-46e0-aaa6-d09c6758943c" (UID: "02161682-1526-46e0-aaa6-d09c6758943c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.993263 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "02161682-1526-46e0-aaa6-d09c6758943c" (UID: "02161682-1526-46e0-aaa6-d09c6758943c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:15:02 crc kubenswrapper[5136]: I0320 08:15:02.993892 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4" (OuterVolumeSpecName: "kube-api-access-hwnk4") pod "02161682-1526-46e0-aaa6-d09c6758943c" (UID: "02161682-1526-46e0-aaa6-d09c6758943c"). InnerVolumeSpecName "kube-api-access-hwnk4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.087432 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/02161682-1526-46e0-aaa6-d09c6758943c-config-volume\") on node \"crc\" DevicePath \"\""
Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.087480 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/02161682-1526-46e0-aaa6-d09c6758943c-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.087533 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwnk4\" (UniqueName: \"kubernetes.io/projected/02161682-1526-46e0-aaa6-d09c6758943c-kube-api-access-hwnk4\") on node \"crc\" DevicePath \"\""
Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.498055 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms" event={"ID":"02161682-1526-46e0-aaa6-d09c6758943c","Type":"ContainerDied","Data":"dfa3996b6fa9b12331f1b2253e9996c6494379a9aacdaf656de0c773aca3ee4c"}
Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.498579 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfa3996b6fa9b12331f1b2253e9996c6494379a9aacdaf656de0c773aca3ee4c"
Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.498162 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"
Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.922630 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz"]
Mar 20 08:15:03 crc kubenswrapper[5136]: I0320 08:15:03.927935 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566530-b9fhz"]
Mar 20 08:15:04 crc kubenswrapper[5136]: I0320 08:15:04.404565 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d251ba65-cac2-4d94-b882-672d97a85bc7" path="/var/lib/kubelet/pods/d251ba65-cac2-4d94-b882-672d97a85bc7/volumes"
Mar 20 08:15:15 crc kubenswrapper[5136]: I0320 08:15:15.822144 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 08:15:15 crc kubenswrapper[5136]: I0320 08:15:15.822721 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 08:15:36 crc kubenswrapper[5136]: I0320 08:15:36.513756 5136 scope.go:117] "RemoveContainer" containerID="ca492a12ee4dfef81804d9a43645add86ef8ab0ce16812e4c74a09d17ae0ea3c"
Mar 20 08:15:45 crc kubenswrapper[5136]: I0320 08:15:45.822340 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 08:15:45 crc kubenswrapper[5136]: I0320 08:15:45.823390 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 08:15:45 crc kubenswrapper[5136]: I0320 08:15:45.823460 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28"
Mar 20 08:15:45 crc kubenswrapper[5136]: I0320 08:15:45.824629 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 20 08:15:45 crc kubenswrapper[5136]: I0320 08:15:45.824773 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14" gracePeriod=600
Mar 20 08:15:46 crc kubenswrapper[5136]: I0320 08:15:46.852537 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14" exitCode=0
Mar 20 08:15:46 crc kubenswrapper[5136]: I0320 08:15:46.852745 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14"}
Mar 20 08:15:46 crc kubenswrapper[5136]: I0320 08:15:46.853045 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"}
Mar 20 08:15:46 crc kubenswrapper[5136]: I0320 08:15:46.853080 5136 scope.go:117] "RemoveContainer" containerID="58d5e6ba7995c9f8a2bcaa77699d916a5f4006f6828d2d031917c6aeba779a82"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.168278 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566576-lnp57"]
Mar 20 08:16:00 crc kubenswrapper[5136]: E0320 08:16:00.170899 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02161682-1526-46e0-aaa6-d09c6758943c" containerName="collect-profiles"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.171113 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="02161682-1526-46e0-aaa6-d09c6758943c" containerName="collect-profiles"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.171541 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="02161682-1526-46e0-aaa6-d09c6758943c" containerName="collect-profiles"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.172480 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566576-lnp57"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.175753 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.176167 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.179159 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.180103 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566576-lnp57"]
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.357569 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vwl7\" (UniqueName: \"kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7\") pod \"auto-csr-approver-29566576-lnp57\" (UID: \"cc65bb98-68d8-471f-82de-50eba3ccfd7d\") " pod="openshift-infra/auto-csr-approver-29566576-lnp57"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.459386 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vwl7\" (UniqueName: \"kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7\") pod \"auto-csr-approver-29566576-lnp57\" (UID: \"cc65bb98-68d8-471f-82de-50eba3ccfd7d\") " pod="openshift-infra/auto-csr-approver-29566576-lnp57"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.486856 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vwl7\" (UniqueName: \"kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7\") pod \"auto-csr-approver-29566576-lnp57\" (UID: \"cc65bb98-68d8-471f-82de-50eba3ccfd7d\") " pod="openshift-infra/auto-csr-approver-29566576-lnp57"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.497896 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566576-lnp57"
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.810280 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566576-lnp57"]
Mar 20 08:16:00 crc kubenswrapper[5136]: I0320 08:16:00.962773 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566576-lnp57" event={"ID":"cc65bb98-68d8-471f-82de-50eba3ccfd7d","Type":"ContainerStarted","Data":"3b60df99619254a8fb5e9271c49f5f3626f2ec90aa4ae01070cc7bdd6c03e997"}
Mar 20 08:16:01 crc kubenswrapper[5136]: I0320 08:16:01.971125 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566576-lnp57" event={"ID":"cc65bb98-68d8-471f-82de-50eba3ccfd7d","Type":"ContainerStarted","Data":"62cda2bd643833622def1d9629824c17e3df9ab31f50b1b04bb053644f55653c"}
Mar 20 08:16:01 crc kubenswrapper[5136]: I0320 08:16:01.986486 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566576-lnp57" podStartSLOduration=1.191945061 podStartE2EDuration="1.986455182s" podCreationTimestamp="2026-03-20 08:16:00 +0000 UTC" firstStartedPulling="2026-03-20 08:16:00.815047379 +0000 UTC m=+5193.074358530" lastFinishedPulling="2026-03-20 08:16:01.60955746 +0000 UTC m=+5193.868868651" observedRunningTime="2026-03-20 08:16:01.985327098 +0000 UTC m=+5194.244638259" watchObservedRunningTime="2026-03-20 08:16:01.986455182 +0000 UTC m=+5194.245766333"
Mar 20 08:16:02 crc kubenswrapper[5136]: I0320 08:16:02.981957 5136 generic.go:334] "Generic (PLEG): container finished" podID="cc65bb98-68d8-471f-82de-50eba3ccfd7d" containerID="62cda2bd643833622def1d9629824c17e3df9ab31f50b1b04bb053644f55653c" exitCode=0
Mar 20 08:16:02 crc kubenswrapper[5136]: I0320 08:16:02.982090 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566576-lnp57" event={"ID":"cc65bb98-68d8-471f-82de-50eba3ccfd7d","Type":"ContainerDied","Data":"62cda2bd643833622def1d9629824c17e3df9ab31f50b1b04bb053644f55653c"}
Mar 20 08:16:04 crc kubenswrapper[5136]: I0320 08:16:04.263340 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566576-lnp57"
Mar 20 08:16:04 crc kubenswrapper[5136]: I0320 08:16:04.411977 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vwl7\" (UniqueName: \"kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7\") pod \"cc65bb98-68d8-471f-82de-50eba3ccfd7d\" (UID: \"cc65bb98-68d8-471f-82de-50eba3ccfd7d\") "
Mar 20 08:16:04 crc kubenswrapper[5136]: I0320 08:16:04.420955 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7" (OuterVolumeSpecName: "kube-api-access-6vwl7") pod "cc65bb98-68d8-471f-82de-50eba3ccfd7d" (UID: "cc65bb98-68d8-471f-82de-50eba3ccfd7d"). InnerVolumeSpecName "kube-api-access-6vwl7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:16:04 crc kubenswrapper[5136]: I0320 08:16:04.513356 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vwl7\" (UniqueName: \"kubernetes.io/projected/cc65bb98-68d8-471f-82de-50eba3ccfd7d-kube-api-access-6vwl7\") on node \"crc\" DevicePath \"\""
Mar 20 08:16:05 crc kubenswrapper[5136]: I0320 08:16:05.011323 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566576-lnp57" event={"ID":"cc65bb98-68d8-471f-82de-50eba3ccfd7d","Type":"ContainerDied","Data":"3b60df99619254a8fb5e9271c49f5f3626f2ec90aa4ae01070cc7bdd6c03e997"}
Mar 20 08:16:05 crc kubenswrapper[5136]: I0320 08:16:05.011368 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b60df99619254a8fb5e9271c49f5f3626f2ec90aa4ae01070cc7bdd6c03e997"
Mar 20 08:16:05 crc kubenswrapper[5136]: I0320 08:16:05.011437 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566576-lnp57"
Mar 20 08:16:05 crc kubenswrapper[5136]: I0320 08:16:05.084179 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566570-hgkrr"]
Mar 20 08:16:05 crc kubenswrapper[5136]: I0320 08:16:05.091906 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566570-hgkrr"]
Mar 20 08:16:06 crc kubenswrapper[5136]: I0320 08:16:06.405554 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04302f0d-411c-49b0-8682-e64bb02c697d" path="/var/lib/kubelet/pods/04302f0d-411c-49b0-8682-e64bb02c697d/volumes"
Mar 20 08:16:36 crc kubenswrapper[5136]: I0320 08:16:36.579888 5136 scope.go:117] "RemoveContainer" containerID="79d8ba8cc4cde24163fe3c378a9767a3b425536e543bf53f50b55aaf7f5ba019"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.148154 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566578-ck7bv"]
Mar 20 08:18:00 crc kubenswrapper[5136]: E0320 08:18:00.149286 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc65bb98-68d8-471f-82de-50eba3ccfd7d" containerName="oc"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.149309 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc65bb98-68d8-471f-82de-50eba3ccfd7d" containerName="oc"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.149567 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc65bb98-68d8-471f-82de-50eba3ccfd7d" containerName="oc"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.150334 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566578-ck7bv"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.153460 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.153800 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.154624 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.170846 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566578-ck7bv"]
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.292244 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkvlx\" (UniqueName: \"kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx\") pod \"auto-csr-approver-29566578-ck7bv\" (UID: \"e55d749f-3c3e-4558-bf74-28a388d382bf\") " pod="openshift-infra/auto-csr-approver-29566578-ck7bv"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.393552 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkvlx\" (UniqueName: \"kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx\") pod \"auto-csr-approver-29566578-ck7bv\" (UID: \"e55d749f-3c3e-4558-bf74-28a388d382bf\") " pod="openshift-infra/auto-csr-approver-29566578-ck7bv"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.416356 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkvlx\" (UniqueName: \"kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx\") pod \"auto-csr-approver-29566578-ck7bv\" (UID: \"e55d749f-3c3e-4558-bf74-28a388d382bf\") " pod="openshift-infra/auto-csr-approver-29566578-ck7bv"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.474964 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566578-ck7bv"
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.865050 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566578-ck7bv"]
Mar 20 08:18:00 crc kubenswrapper[5136]: I0320 08:18:00.948333 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566578-ck7bv" event={"ID":"e55d749f-3c3e-4558-bf74-28a388d382bf","Type":"ContainerStarted","Data":"a49c32499696a833258bc75f86dccb4a476c756aeddc3a977aabab111a094291"}
Mar 20 08:18:02 crc kubenswrapper[5136]: I0320 08:18:02.969372 5136 generic.go:334] "Generic (PLEG): container finished" podID="e55d749f-3c3e-4558-bf74-28a388d382bf" containerID="4e591fa4a3dca1b218c4cdb5e1a7771e6b255424f6feef9dba08785df4cee785" exitCode=0
Mar 20 08:18:02 crc kubenswrapper[5136]: I0320 08:18:02.969455 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566578-ck7bv" event={"ID":"e55d749f-3c3e-4558-bf74-28a388d382bf","Type":"ContainerDied","Data":"4e591fa4a3dca1b218c4cdb5e1a7771e6b255424f6feef9dba08785df4cee785"}
Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.264062 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566578-ck7bv"
Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.353002 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkvlx\" (UniqueName: \"kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx\") pod \"e55d749f-3c3e-4558-bf74-28a388d382bf\" (UID: \"e55d749f-3c3e-4558-bf74-28a388d382bf\") "
Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.357567 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx" (OuterVolumeSpecName: "kube-api-access-xkvlx") pod "e55d749f-3c3e-4558-bf74-28a388d382bf" (UID: "e55d749f-3c3e-4558-bf74-28a388d382bf"). InnerVolumeSpecName "kube-api-access-xkvlx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.455406 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkvlx\" (UniqueName: \"kubernetes.io/projected/e55d749f-3c3e-4558-bf74-28a388d382bf-kube-api-access-xkvlx\") on node \"crc\" DevicePath \"\""
Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.993553 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566578-ck7bv" event={"ID":"e55d749f-3c3e-4558-bf74-28a388d382bf","Type":"ContainerDied","Data":"a49c32499696a833258bc75f86dccb4a476c756aeddc3a977aabab111a094291"}
Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.993992 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a49c32499696a833258bc75f86dccb4a476c756aeddc3a977aabab111a094291"
Mar 20 08:18:04 crc kubenswrapper[5136]: I0320 08:18:04.993664 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566578-ck7bv"
Mar 20 08:18:05 crc kubenswrapper[5136]: I0320 08:18:05.346726 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566572-59rnc"]
Mar 20 08:18:05 crc kubenswrapper[5136]: I0320 08:18:05.353474 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566572-59rnc"]
Mar 20 08:18:06 crc kubenswrapper[5136]: I0320 08:18:06.408363 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e251183d-ffbf-414f-9d88-5830637722be" path="/var/lib/kubelet/pods/e251183d-ffbf-414f-9d88-5830637722be/volumes"
Mar 20 08:18:15 crc kubenswrapper[5136]: I0320 08:18:15.821723 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 08:18:15 crc kubenswrapper[5136]: I0320 08:18:15.822035 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 08:18:24 crc kubenswrapper[5136]: I0320 08:18:24.977349 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2whxt"]
Mar 20 08:18:24 crc kubenswrapper[5136]: E0320 08:18:24.978319 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e55d749f-3c3e-4558-bf74-28a388d382bf" containerName="oc"
Mar 20 08:18:24 crc kubenswrapper[5136]: I0320 08:18:24.978336 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e55d749f-3c3e-4558-bf74-28a388d382bf" containerName="oc"
Mar 20 08:18:24 crc kubenswrapper[5136]: I0320 08:18:24.978554 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e55d749f-3c3e-4558-bf74-28a388d382bf" containerName="oc"
Mar 20 08:18:24 crc kubenswrapper[5136]: I0320 08:18:24.979712 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:24 crc kubenswrapper[5136]: I0320 08:18:24.993389 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2whxt"]
Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.055011 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.055072 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rjt5\" (UniqueName: \"kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.055178 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.156755 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.156833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rjt5\" (UniqueName: \"kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.156895 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.157358 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.157393 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.175702 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rjt5\" (UniqueName: \"kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5\") pod \"redhat-operators-2whxt\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.307965 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:25 crc kubenswrapper[5136]: I0320 08:18:25.769263 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2whxt"]
Mar 20 08:18:26 crc kubenswrapper[5136]: I0320 08:18:26.143853 5136 generic.go:334] "Generic (PLEG): container finished" podID="45d705e0-1d52-414a-95c1-d625388034ae" containerID="c0fe46c7f40beefa74bdd7879eb4d7de32376ed40fdd587487c7654069b3605e" exitCode=0
Mar 20 08:18:26 crc kubenswrapper[5136]: I0320 08:18:26.143938 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerDied","Data":"c0fe46c7f40beefa74bdd7879eb4d7de32376ed40fdd587487c7654069b3605e"}
Mar 20 08:18:26 crc kubenswrapper[5136]: I0320 08:18:26.144059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerStarted","Data":"868823b9edf432ae2a6b2585db0f683f4f0e4f95b09682fbd28eda38b3c71ee1"}
Mar 20 08:18:27 crc kubenswrapper[5136]: I0320 08:18:27.151737 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerStarted","Data":"936e4342d831aaf10446aba0c4a9e359a71632144ccd50a2b23b48530c6ba66e"}
Mar 20 08:18:28 crc kubenswrapper[5136]: I0320 08:18:28.160149 5136 generic.go:334] "Generic (PLEG): container finished" podID="45d705e0-1d52-414a-95c1-d625388034ae" containerID="936e4342d831aaf10446aba0c4a9e359a71632144ccd50a2b23b48530c6ba66e" exitCode=0
Mar 20 08:18:28 crc kubenswrapper[5136]: I0320 08:18:28.160202 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerDied","Data":"936e4342d831aaf10446aba0c4a9e359a71632144ccd50a2b23b48530c6ba66e"}
Mar 20 08:18:29 crc kubenswrapper[5136]: I0320 08:18:29.168162 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerStarted","Data":"b31adcbc2f2a9875bb6e8e8cadb976ad6fe39c73d6153ce09e6cdc804ea2ad6f"}
Mar 20 08:18:29 crc kubenswrapper[5136]: I0320 08:18:29.190187 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2whxt" podStartSLOduration=2.657079852 podStartE2EDuration="5.190168411s" podCreationTimestamp="2026-03-20 08:18:24 +0000 UTC" firstStartedPulling="2026-03-20 08:18:26.145809053 +0000 UTC m=+5338.405120204" lastFinishedPulling="2026-03-20 08:18:28.678897612 +0000 UTC m=+5340.938208763" observedRunningTime="2026-03-20 08:18:29.184677861 +0000 UTC m=+5341.443989012" watchObservedRunningTime="2026-03-20 08:18:29.190168411 +0000 UTC m=+5341.449479562"
Mar 20 08:18:35 crc kubenswrapper[5136]: I0320 08:18:35.308337 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:35 crc kubenswrapper[5136]: I0320 08:18:35.308992 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:35 crc kubenswrapper[5136]: I0320 08:18:35.381442 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:36 crc kubenswrapper[5136]: I0320 08:18:36.287118 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2whxt"
Mar 20 08:18:36 crc kubenswrapper[5136]: I0320 08:18:36.341515 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2whxt"]
Mar 20 08:18:36 crc kubenswrapper[5136]: I0320 08:18:36.655397 5136 scope.go:117] "RemoveContainer" containerID="b9689c90ff59dd42fc4279d62977b62b1e34234f2f912a119c0aaec47a889e16"
Mar 20 08:18:38 crc kubenswrapper[5136]: I0320 08:18:38.250667 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2whxt" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="registry-server" containerID="cri-o://b31adcbc2f2a9875bb6e8e8cadb976ad6fe39c73d6153ce09e6cdc804ea2ad6f" gracePeriod=2
Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.263328 5136 generic.go:334] "Generic (PLEG): container finished" podID="45d705e0-1d52-414a-95c1-d625388034ae" containerID="b31adcbc2f2a9875bb6e8e8cadb976ad6fe39c73d6153ce09e6cdc804ea2ad6f" exitCode=0
Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.263384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerDied","Data":"b31adcbc2f2a9875bb6e8e8cadb976ad6fe39c73d6153ce09e6cdc804ea2ad6f"}
Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.735051 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.865960 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rjt5\" (UniqueName: \"kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5\") pod \"45d705e0-1d52-414a-95c1-d625388034ae\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.866102 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities\") pod \"45d705e0-1d52-414a-95c1-d625388034ae\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.866139 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content\") pod \"45d705e0-1d52-414a-95c1-d625388034ae\" (UID: \"45d705e0-1d52-414a-95c1-d625388034ae\") " Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.867810 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities" (OuterVolumeSpecName: "utilities") pod "45d705e0-1d52-414a-95c1-d625388034ae" (UID: "45d705e0-1d52-414a-95c1-d625388034ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.871501 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5" (OuterVolumeSpecName: "kube-api-access-2rjt5") pod "45d705e0-1d52-414a-95c1-d625388034ae" (UID: "45d705e0-1d52-414a-95c1-d625388034ae"). InnerVolumeSpecName "kube-api-access-2rjt5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.968054 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rjt5\" (UniqueName: \"kubernetes.io/projected/45d705e0-1d52-414a-95c1-d625388034ae-kube-api-access-2rjt5\") on node \"crc\" DevicePath \"\"" Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.968337 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:18:39 crc kubenswrapper[5136]: I0320 08:18:39.998477 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "45d705e0-1d52-414a-95c1-d625388034ae" (UID: "45d705e0-1d52-414a-95c1-d625388034ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.069564 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/45d705e0-1d52-414a-95c1-d625388034ae-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.274217 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2whxt" event={"ID":"45d705e0-1d52-414a-95c1-d625388034ae","Type":"ContainerDied","Data":"868823b9edf432ae2a6b2585db0f683f4f0e4f95b09682fbd28eda38b3c71ee1"} Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.274298 5136 scope.go:117] "RemoveContainer" containerID="b31adcbc2f2a9875bb6e8e8cadb976ad6fe39c73d6153ce09e6cdc804ea2ad6f" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.274426 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2whxt" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.297899 5136 scope.go:117] "RemoveContainer" containerID="936e4342d831aaf10446aba0c4a9e359a71632144ccd50a2b23b48530c6ba66e" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.323382 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2whxt"] Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.332912 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2whxt"] Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.338683 5136 scope.go:117] "RemoveContainer" containerID="c0fe46c7f40beefa74bdd7879eb4d7de32376ed40fdd587487c7654069b3605e" Mar 20 08:18:40 crc kubenswrapper[5136]: I0320 08:18:40.412730 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d705e0-1d52-414a-95c1-d625388034ae" path="/var/lib/kubelet/pods/45d705e0-1d52-414a-95c1-d625388034ae/volumes" Mar 20 08:18:45 crc kubenswrapper[5136]: I0320 08:18:45.822161 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:18:45 crc kubenswrapper[5136]: I0320 08:18:45.822751 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:19:15 crc kubenswrapper[5136]: I0320 08:19:15.821710 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:19:15 crc kubenswrapper[5136]: I0320 08:19:15.822447 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:19:15 crc kubenswrapper[5136]: I0320 08:19:15.822512 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:19:15 crc kubenswrapper[5136]: I0320 08:19:15.823390 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:19:15 crc kubenswrapper[5136]: I0320 08:19:15.823459 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" gracePeriod=600 Mar 20 08:19:15 crc kubenswrapper[5136]: E0320 08:19:15.956341 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:19:16 crc kubenswrapper[5136]: I0320 08:19:16.563602 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" exitCode=0 Mar 20 08:19:16 crc kubenswrapper[5136]: I0320 08:19:16.563682 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"} Mar 20 08:19:16 crc kubenswrapper[5136]: I0320 08:19:16.563742 5136 scope.go:117] "RemoveContainer" containerID="dd2ade7d5e861c64b69837aa7e42e6683017e086bd68cbfb02b7f2324fc9da14" Mar 20 08:19:16 crc kubenswrapper[5136]: I0320 08:19:16.564532 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:19:16 crc kubenswrapper[5136]: E0320 08:19:16.565029 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:19:28 crc kubenswrapper[5136]: I0320 08:19:28.408275 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:19:28 crc kubenswrapper[5136]: E0320 08:19:28.409025 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:19:41 crc kubenswrapper[5136]: I0320 08:19:41.396601 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:19:41 crc kubenswrapper[5136]: E0320 08:19:41.397617 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:19:53 crc kubenswrapper[5136]: I0320 08:19:53.397059 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:19:53 crc kubenswrapper[5136]: E0320 08:19:53.398062 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.139121 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566580-gxgs4"] Mar 20 08:20:00 crc kubenswrapper[5136]: E0320 08:20:00.139688 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="extract-utilities" Mar 20 08:20:00 crc 
kubenswrapper[5136]: I0320 08:20:00.139701 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="extract-utilities" Mar 20 08:20:00 crc kubenswrapper[5136]: E0320 08:20:00.139724 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="extract-content" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.139731 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="extract-content" Mar 20 08:20:00 crc kubenswrapper[5136]: E0320 08:20:00.139744 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="registry-server" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.139751 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="registry-server" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.139910 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d705e0-1d52-414a-95c1-d625388034ae" containerName="registry-server" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.140330 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.142556 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.142957 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.143189 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.147511 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn9z8\" (UniqueName: \"kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8\") pod \"auto-csr-approver-29566580-gxgs4\" (UID: \"c13adcfb-f420-46c9-bbde-3350b761780e\") " pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.150297 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566580-gxgs4"] Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.249092 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn9z8\" (UniqueName: \"kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8\") pod \"auto-csr-approver-29566580-gxgs4\" (UID: \"c13adcfb-f420-46c9-bbde-3350b761780e\") " pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.265698 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn9z8\" (UniqueName: \"kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8\") pod \"auto-csr-approver-29566580-gxgs4\" (UID: \"c13adcfb-f420-46c9-bbde-3350b761780e\") " 
pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.462804 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.891507 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566580-gxgs4"] Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.901636 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:20:00 crc kubenswrapper[5136]: I0320 08:20:00.916981 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" event={"ID":"c13adcfb-f420-46c9-bbde-3350b761780e","Type":"ContainerStarted","Data":"279516a51f639e814182eed36dc9ad81e44bcadd81cdd2484fd0d7eaccc139bf"} Mar 20 08:20:02 crc kubenswrapper[5136]: I0320 08:20:02.935514 5136 generic.go:334] "Generic (PLEG): container finished" podID="c13adcfb-f420-46c9-bbde-3350b761780e" containerID="49a3dc4dd1c8ed19d1dc39a8fdd22be54b526fe4de3c63ff0b1f4d1ea3e0a979" exitCode=0 Mar 20 08:20:02 crc kubenswrapper[5136]: I0320 08:20:02.935563 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" event={"ID":"c13adcfb-f420-46c9-bbde-3350b761780e","Type":"ContainerDied","Data":"49a3dc4dd1c8ed19d1dc39a8fdd22be54b526fe4de3c63ff0b1f4d1ea3e0a979"} Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.242788 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.420435 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn9z8\" (UniqueName: \"kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8\") pod \"c13adcfb-f420-46c9-bbde-3350b761780e\" (UID: \"c13adcfb-f420-46c9-bbde-3350b761780e\") " Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.456012 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8" (OuterVolumeSpecName: "kube-api-access-tn9z8") pod "c13adcfb-f420-46c9-bbde-3350b761780e" (UID: "c13adcfb-f420-46c9-bbde-3350b761780e"). InnerVolumeSpecName "kube-api-access-tn9z8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.522435 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn9z8\" (UniqueName: \"kubernetes.io/projected/c13adcfb-f420-46c9-bbde-3350b761780e-kube-api-access-tn9z8\") on node \"crc\" DevicePath \"\"" Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.958882 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" event={"ID":"c13adcfb-f420-46c9-bbde-3350b761780e","Type":"ContainerDied","Data":"279516a51f639e814182eed36dc9ad81e44bcadd81cdd2484fd0d7eaccc139bf"} Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.958931 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="279516a51f639e814182eed36dc9ad81e44bcadd81cdd2484fd0d7eaccc139bf" Mar 20 08:20:04 crc kubenswrapper[5136]: I0320 08:20:04.958988 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566580-gxgs4" Mar 20 08:20:05 crc kubenswrapper[5136]: I0320 08:20:05.311480 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566574-kgksp"] Mar 20 08:20:05 crc kubenswrapper[5136]: I0320 08:20:05.317259 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566574-kgksp"] Mar 20 08:20:06 crc kubenswrapper[5136]: I0320 08:20:06.405502 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa57e02b-5eb6-401e-997d-a451c285486e" path="/var/lib/kubelet/pods/aa57e02b-5eb6-401e-997d-a451c285486e/volumes" Mar 20 08:20:07 crc kubenswrapper[5136]: I0320 08:20:07.396941 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:20:07 crc kubenswrapper[5136]: E0320 08:20:07.397625 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:20:20 crc kubenswrapper[5136]: I0320 08:20:20.396334 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:20:20 crc kubenswrapper[5136]: E0320 08:20:20.396975 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:20:35 crc kubenswrapper[5136]: I0320 08:20:35.397978 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:20:35 crc kubenswrapper[5136]: E0320 08:20:35.399476 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:20:36 crc kubenswrapper[5136]: I0320 08:20:36.761588 5136 scope.go:117] "RemoveContainer" containerID="f86c6f9b4ef14a2e3961efc76d0057737a2a18ff9c282b67f83d05bca07f26b4" Mar 20 08:20:47 crc kubenswrapper[5136]: I0320 08:20:47.397188 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:20:47 crc kubenswrapper[5136]: E0320 08:20:47.398290 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:02 crc kubenswrapper[5136]: I0320 08:21:02.396659 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:21:02 crc kubenswrapper[5136]: E0320 08:21:02.397391 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:17 crc kubenswrapper[5136]: I0320 08:21:17.396092 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:21:17 crc kubenswrapper[5136]: E0320 08:21:17.396941 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:32 crc kubenswrapper[5136]: I0320 08:21:32.396887 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:21:32 crc kubenswrapper[5136]: E0320 08:21:32.397612 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:44 crc kubenswrapper[5136]: I0320 08:21:44.399611 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:21:44 crc kubenswrapper[5136]: E0320 08:21:44.400274 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.396556 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:21:57 crc kubenswrapper[5136]: E0320 08:21:57.397419 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.959085 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-68w4f"] Mar 20 08:21:57 crc kubenswrapper[5136]: E0320 08:21:57.959364 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c13adcfb-f420-46c9-bbde-3350b761780e" containerName="oc" Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.959377 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13adcfb-f420-46c9-bbde-3350b761780e" containerName="oc" Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.959515 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c13adcfb-f420-46c9-bbde-3350b761780e" containerName="oc" Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.960452 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:21:57 crc kubenswrapper[5136]: I0320 08:21:57.971867 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68w4f"]
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.058697 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9jp4\" (UniqueName: \"kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.059124 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.059247 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.160692 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.160787 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9jp4\" (UniqueName: \"kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.160938 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.161130 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.161508 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.179809 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9jp4\" (UniqueName: \"kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4\") pod \"certified-operators-68w4f\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") " pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.285189 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.699771 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-68w4f"]
Mar 20 08:21:58 crc kubenswrapper[5136]: I0320 08:21:58.819533 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerStarted","Data":"1f24178cc58c43551a444560f38eea1e86a4ab00ea0a51318dbd7c3bf67fbd4d"}
Mar 20 08:21:59 crc kubenswrapper[5136]: I0320 08:21:59.832417 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerDied","Data":"54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2"}
Mar 20 08:21:59 crc kubenswrapper[5136]: I0320 08:21:59.832445 5136 generic.go:334] "Generic (PLEG): container finished" podID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerID="54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2" exitCode=0
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.155336 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566582-wv2dn"]
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.157589 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566582-wv2dn"
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.160094 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.160322 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.160519 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.163699 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566582-wv2dn"]
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.194099 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-854lw\" (UniqueName: \"kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw\") pod \"auto-csr-approver-29566582-wv2dn\" (UID: \"36fd17ca-2655-4388-807a-3740ab031402\") " pod="openshift-infra/auto-csr-approver-29566582-wv2dn"
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.295414 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-854lw\" (UniqueName: \"kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw\") pod \"auto-csr-approver-29566582-wv2dn\" (UID: \"36fd17ca-2655-4388-807a-3740ab031402\") " pod="openshift-infra/auto-csr-approver-29566582-wv2dn"
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.312925 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-854lw\" (UniqueName: \"kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw\") pod \"auto-csr-approver-29566582-wv2dn\" (UID: \"36fd17ca-2655-4388-807a-3740ab031402\") " pod="openshift-infra/auto-csr-approver-29566582-wv2dn"
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.488496 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566582-wv2dn"
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.845062 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerStarted","Data":"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad"}
Mar 20 08:22:00 crc kubenswrapper[5136]: I0320 08:22:00.940646 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566582-wv2dn"]
Mar 20 08:22:00 crc kubenswrapper[5136]: W0320 08:22:00.992558 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36fd17ca_2655_4388_807a_3740ab031402.slice/crio-ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce WatchSource:0}: Error finding container ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce: Status 404 returned error can't find the container with id ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce
Mar 20 08:22:01 crc kubenswrapper[5136]: I0320 08:22:01.854623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566582-wv2dn" event={"ID":"36fd17ca-2655-4388-807a-3740ab031402","Type":"ContainerStarted","Data":"ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce"}
Mar 20 08:22:01 crc kubenswrapper[5136]: I0320 08:22:01.857299 5136 generic.go:334] "Generic (PLEG): container finished" podID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerID="23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad" exitCode=0
Mar 20 08:22:01 crc kubenswrapper[5136]: I0320 08:22:01.857336 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerDied","Data":"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad"}
Mar 20 08:22:02 crc kubenswrapper[5136]: I0320 08:22:02.869204 5136 generic.go:334] "Generic (PLEG): container finished" podID="36fd17ca-2655-4388-807a-3740ab031402" containerID="a529e171ce4e400e77421cdd13032062bdb0c3099972bc7c31cdc0391d1d0584" exitCode=0
Mar 20 08:22:02 crc kubenswrapper[5136]: I0320 08:22:02.869306 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566582-wv2dn" event={"ID":"36fd17ca-2655-4388-807a-3740ab031402","Type":"ContainerDied","Data":"a529e171ce4e400e77421cdd13032062bdb0c3099972bc7c31cdc0391d1d0584"}
Mar 20 08:22:02 crc kubenswrapper[5136]: I0320 08:22:02.874389 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerStarted","Data":"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda"}
Mar 20 08:22:02 crc kubenswrapper[5136]: I0320 08:22:02.918510 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-68w4f" podStartSLOduration=3.469183959 podStartE2EDuration="5.918485243s" podCreationTimestamp="2026-03-20 08:21:57 +0000 UTC" firstStartedPulling="2026-03-20 08:21:59.834007026 +0000 UTC m=+5552.093318177" lastFinishedPulling="2026-03-20 08:22:02.28330827 +0000 UTC m=+5554.542619461" observedRunningTime="2026-03-20 08:22:02.914643716 +0000 UTC m=+5555.173954887" watchObservedRunningTime="2026-03-20 08:22:02.918485243 +0000 UTC m=+5555.177796404"
Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.141381 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566582-wv2dn"
Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.248715 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-854lw\" (UniqueName: \"kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw\") pod \"36fd17ca-2655-4388-807a-3740ab031402\" (UID: \"36fd17ca-2655-4388-807a-3740ab031402\") "
Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.257600 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw" (OuterVolumeSpecName: "kube-api-access-854lw") pod "36fd17ca-2655-4388-807a-3740ab031402" (UID: "36fd17ca-2655-4388-807a-3740ab031402"). InnerVolumeSpecName "kube-api-access-854lw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.350510 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-854lw\" (UniqueName: \"kubernetes.io/projected/36fd17ca-2655-4388-807a-3740ab031402-kube-api-access-854lw\") on node \"crc\" DevicePath \"\""
Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.889194 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566582-wv2dn" event={"ID":"36fd17ca-2655-4388-807a-3740ab031402","Type":"ContainerDied","Data":"ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce"}
Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.889239 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef659d7493c6b730fe84e187621f4cafb7209c6e0e29d6e9a2683e60d4f167ce"
Mar 20 08:22:04 crc kubenswrapper[5136]: I0320 08:22:04.889239 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566582-wv2dn"
Mar 20 08:22:05 crc kubenswrapper[5136]: I0320 08:22:05.203548 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566576-lnp57"]
Mar 20 08:22:05 crc kubenswrapper[5136]: I0320 08:22:05.209980 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566576-lnp57"]
Mar 20 08:22:06 crc kubenswrapper[5136]: I0320 08:22:06.404147 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc65bb98-68d8-471f-82de-50eba3ccfd7d" path="/var/lib/kubelet/pods/cc65bb98-68d8-471f-82de-50eba3ccfd7d/volumes"
Mar 20 08:22:08 crc kubenswrapper[5136]: I0320 08:22:08.286389 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:22:08 crc kubenswrapper[5136]: I0320 08:22:08.286507 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:22:08 crc kubenswrapper[5136]: I0320 08:22:08.371080 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:22:08 crc kubenswrapper[5136]: I0320 08:22:08.961767 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:22:09 crc kubenswrapper[5136]: I0320 08:22:09.396599 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"
Mar 20 08:22:09 crc kubenswrapper[5136]: E0320 08:22:09.396835 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:22:09 crc kubenswrapper[5136]: I0320 08:22:09.830371 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-68w4f"]
Mar 20 08:22:10 crc kubenswrapper[5136]: I0320 08:22:10.929336 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-68w4f" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="registry-server" containerID="cri-o://9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda" gracePeriod=2
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.277971 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.349912 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content\") pod \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") "
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.350008 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9jp4\" (UniqueName: \"kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4\") pod \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") "
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.350095 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities\") pod \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\" (UID: \"080d55d0-394e-46a8-a6b9-7e6b7c5759de\") "
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.351302 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities" (OuterVolumeSpecName: "utilities") pod "080d55d0-394e-46a8-a6b9-7e6b7c5759de" (UID: "080d55d0-394e-46a8-a6b9-7e6b7c5759de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.355724 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4" (OuterVolumeSpecName: "kube-api-access-x9jp4") pod "080d55d0-394e-46a8-a6b9-7e6b7c5759de" (UID: "080d55d0-394e-46a8-a6b9-7e6b7c5759de"). InnerVolumeSpecName "kube-api-access-x9jp4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.452027 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9jp4\" (UniqueName: \"kubernetes.io/projected/080d55d0-394e-46a8-a6b9-7e6b7c5759de-kube-api-access-x9jp4\") on node \"crc\" DevicePath \"\""
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.452055 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.639201 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "080d55d0-394e-46a8-a6b9-7e6b7c5759de" (UID: "080d55d0-394e-46a8-a6b9-7e6b7c5759de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.653620 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/080d55d0-394e-46a8-a6b9-7e6b7c5759de-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.939075 5136 generic.go:334] "Generic (PLEG): container finished" podID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerID="9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda" exitCode=0
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.939136 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerDied","Data":"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda"}
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.939186 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-68w4f" event={"ID":"080d55d0-394e-46a8-a6b9-7e6b7c5759de","Type":"ContainerDied","Data":"1f24178cc58c43551a444560f38eea1e86a4ab00ea0a51318dbd7c3bf67fbd4d"}
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.939214 5136 scope.go:117] "RemoveContainer" containerID="9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda"
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.939237 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-68w4f"
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.977183 5136 scope.go:117] "RemoveContainer" containerID="23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad"
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.980224 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-68w4f"]
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.987636 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-68w4f"]
Mar 20 08:22:11 crc kubenswrapper[5136]: I0320 08:22:11.996784 5136 scope.go:117] "RemoveContainer" containerID="54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2"
Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.017386 5136 scope.go:117] "RemoveContainer" containerID="9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda"
Mar 20 08:22:12 crc kubenswrapper[5136]: E0320 08:22:12.017887 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda\": container with ID starting with 9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda not found: ID does not exist" containerID="9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda"
Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.017960 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda"} err="failed to get container status \"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda\": rpc error: code = NotFound desc = could not find container \"9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda\": container with ID starting with 9c7d8b7da892de33606b657d3429b455fd7826f8caa67336575986a55ddb6dda not found: ID does not exist"
Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.018005 5136 scope.go:117] "RemoveContainer" containerID="23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad"
Mar 20 08:22:12 crc kubenswrapper[5136]: E0320 08:22:12.018313 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad\": container with ID starting with 23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad not found: ID does not exist" containerID="23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad"
Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.018343 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad"} err="failed to get container status \"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad\": rpc error: code = NotFound desc = could not find container \"23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad\": container with ID starting with 23479181b35cbc8d9ce2825b56a41c428cab8e7f3cad92e99bccdb39b902baad not found: ID does not exist"
Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.018374 5136 scope.go:117] "RemoveContainer" containerID="54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2"
Mar 20 08:22:12 crc kubenswrapper[5136]: E0320 08:22:12.018903 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2\": container with ID starting with 54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2 not found: ID does not exist" containerID="54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2"
Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.018947 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2"} err="failed to get container status \"54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2\": rpc error: code = NotFound desc = could not find container \"54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2\": container with ID starting with 54f55647f8c1f36f63f1a2690a773b5fbd29005e3d797369620f4b4ca80730a2 not found: ID does not exist"
Mar 20 08:22:12 crc kubenswrapper[5136]: I0320 08:22:12.407440 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" path="/var/lib/kubelet/pods/080d55d0-394e-46a8-a6b9-7e6b7c5759de/volumes"
Mar 20 08:22:24 crc kubenswrapper[5136]: I0320 08:22:24.397171 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"
Mar 20 08:22:24 crc kubenswrapper[5136]: E0320 08:22:24.398128 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:22:36 crc kubenswrapper[5136]: I0320 08:22:36.833622 5136 scope.go:117] "RemoveContainer" containerID="62cda2bd643833622def1d9629824c17e3df9ab31f50b1b04bb053644f55653c"
Mar 20 08:22:38 crc kubenswrapper[5136]: I0320 08:22:38.400750 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"
Mar 20 08:22:38 crc kubenswrapper[5136]: E0320 08:22:38.401242 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:22:50 crc kubenswrapper[5136]: I0320 08:22:50.397298 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"
Mar 20 08:22:50 crc kubenswrapper[5136]: E0320 08:22:50.398533 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:23:02 crc kubenswrapper[5136]: I0320 08:23:02.397043 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"
Mar 20 08:23:02 crc kubenswrapper[5136]: E0320 08:23:02.398423 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:23:14 crc kubenswrapper[5136]: I0320 08:23:14.396501 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"
Mar 20 08:23:14 crc kubenswrapper[5136]: E0320 08:23:14.397619 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:23:26 crc kubenswrapper[5136]: I0320 08:23:26.396570 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"
Mar 20 08:23:26 crc kubenswrapper[5136]: E0320 08:23:26.397425 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:23:40 crc kubenswrapper[5136]: I0320 08:23:40.398739 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"
Mar 20 08:23:40 crc kubenswrapper[5136]: E0320 08:23:40.399483 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:23:52 crc kubenswrapper[5136]: I0320 08:23:52.396519 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"
Mar 20 08:23:52 crc kubenswrapper[5136]: E0320 08:23:52.397326 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.171286 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566584-ht4pj"]
Mar 20 08:24:00 crc kubenswrapper[5136]: E0320 08:24:00.172750 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36fd17ca-2655-4388-807a-3740ab031402" containerName="oc"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.172782 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="36fd17ca-2655-4388-807a-3740ab031402" containerName="oc"
Mar 20 08:24:00 crc kubenswrapper[5136]: E0320 08:24:00.172860 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="extract-content"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.172882 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="extract-content"
Mar 20 08:24:00 crc kubenswrapper[5136]: E0320 08:24:00.172913 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="extract-utilities"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.172933 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="extract-utilities"
Mar 20 08:24:00 crc kubenswrapper[5136]: E0320 08:24:00.172958 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="registry-server"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.172974 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="registry-server"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.173301 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="080d55d0-394e-46a8-a6b9-7e6b7c5759de" containerName="registry-server"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.173367 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="36fd17ca-2655-4388-807a-3740ab031402" containerName="oc"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.174365 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566584-ht4pj"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.178785 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.179260 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.179459 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.186341 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566584-ht4pj"]
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.294527 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsr2q\" (UniqueName: \"kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q\") pod \"auto-csr-approver-29566584-ht4pj\" (UID: \"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2\") " pod="openshift-infra/auto-csr-approver-29566584-ht4pj"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.396832 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsr2q\" (UniqueName: \"kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q\") pod \"auto-csr-approver-29566584-ht4pj\" (UID: \"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2\") " pod="openshift-infra/auto-csr-approver-29566584-ht4pj"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.416982 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsr2q\" (UniqueName: \"kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q\") pod \"auto-csr-approver-29566584-ht4pj\" (UID: \"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2\") " pod="openshift-infra/auto-csr-approver-29566584-ht4pj"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.505862 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566584-ht4pj"
Mar 20 08:24:00 crc kubenswrapper[5136]: I0320 08:24:00.979402 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566584-ht4pj"]
Mar 20 08:24:00 crc kubenswrapper[5136]: W0320 08:24:00.983359 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49bc2e6c_77f7_42f1_ba1e_86bbc6bdc2d2.slice/crio-0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c WatchSource:0}: Error finding container 0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c: Status 404 returned error can't find the container with id 0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c
Mar 20 08:24:01 crc kubenswrapper[5136]: I0320 08:24:01.779493 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566584-ht4pj" event={"ID":"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2","Type":"ContainerStarted","Data":"0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c"}
Mar 20 08:24:02 crc kubenswrapper[5136]: I0320 08:24:02.786468 5136 generic.go:334] "Generic (PLEG): container finished" podID="49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" containerID="b6e56033203d796df41b39eddfb04e55cd2822f9ba7f0e9edd26141d7d5d92b3" exitCode=0
Mar 20 08:24:02 crc kubenswrapper[5136]: I0320 08:24:02.786515 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566584-ht4pj" event={"ID":"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2","Type":"ContainerDied","Data":"b6e56033203d796df41b39eddfb04e55cd2822f9ba7f0e9edd26141d7d5d92b3"}
Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.081930 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566584-ht4pj"
Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.251025 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsr2q\" (UniqueName: \"kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q\") pod \"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2\" (UID: \"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2\") "
Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.256617 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q" (OuterVolumeSpecName: "kube-api-access-nsr2q") pod "49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" (UID: "49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2"). InnerVolumeSpecName "kube-api-access-nsr2q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.352404 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsr2q\" (UniqueName: \"kubernetes.io/projected/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2-kube-api-access-nsr2q\") on node \"crc\" DevicePath \"\""
Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.801451 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566584-ht4pj" event={"ID":"49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2","Type":"ContainerDied","Data":"0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c"}
Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.801757 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0552be7161fcc1ae98bc979f3aec92a796b808ade9df30c38c59b5488c33dc4c"
Mar 20 08:24:04 crc kubenswrapper[5136]: I0320 08:24:04.801496 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566584-ht4pj"
Mar 20 08:24:05 crc kubenswrapper[5136]: I0320 08:24:05.146130 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566578-ck7bv"]
Mar 20 08:24:05 crc kubenswrapper[5136]: I0320 08:24:05.151135 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566578-ck7bv"]
Mar 20 08:24:06 crc kubenswrapper[5136]: I0320 08:24:06.396269 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"
Mar 20 08:24:06 crc kubenswrapper[5136]: E0320 08:24:06.396481 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\""
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:24:06 crc kubenswrapper[5136]: I0320 08:24:06.416612 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e55d749f-3c3e-4558-bf74-28a388d382bf" path="/var/lib/kubelet/pods/e55d749f-3c3e-4558-bf74-28a388d382bf/volumes" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.397009 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.867386 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:19 crc kubenswrapper[5136]: E0320 08:24:19.868576 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" containerName="oc" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.868672 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" containerName="oc" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.868936 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" containerName="oc" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.870238 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.883025 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.927251 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb"} Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.979888 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.979977 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:19 crc kubenswrapper[5136]: I0320 08:24:19.980043 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs4k2\" (UniqueName: \"kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.082192 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bs4k2\" (UniqueName: \"kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.082309 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.082357 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.082828 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.084213 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.107351 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs4k2\" (UniqueName: 
\"kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2\") pod \"community-operators-m8dtk\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.198148 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.537479 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.937428 5136 generic.go:334] "Generic (PLEG): container finished" podID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerID="26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a" exitCode=0 Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.937475 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerDied","Data":"26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a"} Mar 20 08:24:20 crc kubenswrapper[5136]: I0320 08:24:20.937509 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerStarted","Data":"fd56c1fcd17605cb687c5c5f760a499825e07465feb19ffb8f74453066691848"} Mar 20 08:24:21 crc kubenswrapper[5136]: I0320 08:24:21.946855 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerStarted","Data":"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3"} Mar 20 08:24:22 crc kubenswrapper[5136]: I0320 08:24:22.955580 5136 generic.go:334] "Generic (PLEG): container finished" podID="db0a0224-28cb-4c0b-9679-87af0cc13cee" 
containerID="f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3" exitCode=0 Mar 20 08:24:22 crc kubenswrapper[5136]: I0320 08:24:22.955620 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerDied","Data":"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3"} Mar 20 08:24:23 crc kubenswrapper[5136]: I0320 08:24:23.966300 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerStarted","Data":"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462"} Mar 20 08:24:23 crc kubenswrapper[5136]: I0320 08:24:23.989933 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m8dtk" podStartSLOduration=2.600853157 podStartE2EDuration="4.98989465s" podCreationTimestamp="2026-03-20 08:24:19 +0000 UTC" firstStartedPulling="2026-03-20 08:24:20.939087742 +0000 UTC m=+5693.198398913" lastFinishedPulling="2026-03-20 08:24:23.328129235 +0000 UTC m=+5695.587440406" observedRunningTime="2026-03-20 08:24:23.986110362 +0000 UTC m=+5696.245421513" watchObservedRunningTime="2026-03-20 08:24:23.98989465 +0000 UTC m=+5696.249205801" Mar 20 08:24:30 crc kubenswrapper[5136]: I0320 08:24:30.198594 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:30 crc kubenswrapper[5136]: I0320 08:24:30.199405 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:30 crc kubenswrapper[5136]: I0320 08:24:30.257387 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:31 crc kubenswrapper[5136]: I0320 
08:24:31.081611 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:31 crc kubenswrapper[5136]: I0320 08:24:31.135269 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.028248 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m8dtk" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="registry-server" containerID="cri-o://125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462" gracePeriod=2 Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.429311 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.474464 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities\") pod \"db0a0224-28cb-4c0b-9679-87af0cc13cee\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.474847 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content\") pod \"db0a0224-28cb-4c0b-9679-87af0cc13cee\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.474933 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs4k2\" (UniqueName: \"kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2\") pod \"db0a0224-28cb-4c0b-9679-87af0cc13cee\" (UID: \"db0a0224-28cb-4c0b-9679-87af0cc13cee\") " Mar 20 08:24:33 crc kubenswrapper[5136]: 
I0320 08:24:33.475450 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities" (OuterVolumeSpecName: "utilities") pod "db0a0224-28cb-4c0b-9679-87af0cc13cee" (UID: "db0a0224-28cb-4c0b-9679-87af0cc13cee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.481108 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2" (OuterVolumeSpecName: "kube-api-access-bs4k2") pod "db0a0224-28cb-4c0b-9679-87af0cc13cee" (UID: "db0a0224-28cb-4c0b-9679-87af0cc13cee"). InnerVolumeSpecName "kube-api-access-bs4k2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.524289 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db0a0224-28cb-4c0b-9679-87af0cc13cee" (UID: "db0a0224-28cb-4c0b-9679-87af0cc13cee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.575783 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs4k2\" (UniqueName: \"kubernetes.io/projected/db0a0224-28cb-4c0b-9679-87af0cc13cee-kube-api-access-bs4k2\") on node \"crc\" DevicePath \"\"" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.575861 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:24:33 crc kubenswrapper[5136]: I0320 08:24:33.575872 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db0a0224-28cb-4c0b-9679-87af0cc13cee-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.035522 5136 generic.go:334] "Generic (PLEG): container finished" podID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerID="125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462" exitCode=0 Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.035562 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerDied","Data":"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462"} Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.035597 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m8dtk" event={"ID":"db0a0224-28cb-4c0b-9679-87af0cc13cee","Type":"ContainerDied","Data":"fd56c1fcd17605cb687c5c5f760a499825e07465feb19ffb8f74453066691848"} Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.035609 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m8dtk" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.035614 5136 scope.go:117] "RemoveContainer" containerID="125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.057079 5136 scope.go:117] "RemoveContainer" containerID="f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.063726 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.070269 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m8dtk"] Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.084967 5136 scope.go:117] "RemoveContainer" containerID="26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.113170 5136 scope.go:117] "RemoveContainer" containerID="125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462" Mar 20 08:24:34 crc kubenswrapper[5136]: E0320 08:24:34.113657 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462\": container with ID starting with 125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462 not found: ID does not exist" containerID="125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.113709 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462"} err="failed to get container status \"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462\": rpc error: code = NotFound desc = could not find 
container \"125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462\": container with ID starting with 125b2220d8aebdbd9b742cea6c46c10b0f8a7909b2486c638a6c4120288f8462 not found: ID does not exist" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.113745 5136 scope.go:117] "RemoveContainer" containerID="f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3" Mar 20 08:24:34 crc kubenswrapper[5136]: E0320 08:24:34.114247 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3\": container with ID starting with f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3 not found: ID does not exist" containerID="f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.114308 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3"} err="failed to get container status \"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3\": rpc error: code = NotFound desc = could not find container \"f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3\": container with ID starting with f10ce8656bd251519d016b531c9028809c4b85f6c72d56c8978b275fe74f12d3 not found: ID does not exist" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.114336 5136 scope.go:117] "RemoveContainer" containerID="26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a" Mar 20 08:24:34 crc kubenswrapper[5136]: E0320 08:24:34.114765 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a\": container with ID starting with 26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a not found: ID does 
not exist" containerID="26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.114806 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a"} err="failed to get container status \"26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a\": rpc error: code = NotFound desc = could not find container \"26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a\": container with ID starting with 26a330df72e29eda8ad29d170bd8dcc67c0a58c533c510ee07ebe50c6213489a not found: ID does not exist" Mar 20 08:24:34 crc kubenswrapper[5136]: I0320 08:24:34.404758 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" path="/var/lib/kubelet/pods/db0a0224-28cb-4c0b-9679-87af0cc13cee/volumes" Mar 20 08:24:36 crc kubenswrapper[5136]: I0320 08:24:36.923589 5136 scope.go:117] "RemoveContainer" containerID="4e591fa4a3dca1b218c4cdb5e1a7771e6b255424f6feef9dba08785df4cee785" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.266587 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"] Mar 20 08:25:46 crc kubenswrapper[5136]: E0320 08:25:46.267959 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="registry-server" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.267988 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="registry-server" Mar 20 08:25:46 crc kubenswrapper[5136]: E0320 08:25:46.268009 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="extract-utilities" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.268022 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="extract-utilities" Mar 20 08:25:46 crc kubenswrapper[5136]: E0320 08:25:46.268051 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="extract-content" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.268065 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="extract-content" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.268311 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="db0a0224-28cb-4c0b-9679-87af0cc13cee" containerName="registry-server" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.270093 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.280128 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"] Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.433413 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.433581 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.433849 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfmj\" (UniqueName: \"kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.535222 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfmj\" (UniqueName: \"kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.535304 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.535331 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.535785 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.535987 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.561740 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwfmj\" (UniqueName: \"kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj\") pod \"redhat-marketplace-crbpt\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") " pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.588622 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crbpt" Mar 20 08:25:46 crc kubenswrapper[5136]: I0320 08:25:46.875742 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"] Mar 20 08:25:47 crc kubenswrapper[5136]: I0320 08:25:47.650004 5136 generic.go:334] "Generic (PLEG): container finished" podID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerID="67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5" exitCode=0 Mar 20 08:25:47 crc kubenswrapper[5136]: I0320 08:25:47.650060 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerDied","Data":"67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5"} Mar 20 08:25:47 crc kubenswrapper[5136]: I0320 08:25:47.650098 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerStarted","Data":"a31991e44a39a5ee2d3f1743e87bc7b37d9628d9071253ddeec5aaddbef70cec"} Mar 20 08:25:47 crc kubenswrapper[5136]: I0320 
08:25:47.652628 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 08:25:49 crc kubenswrapper[5136]: I0320 08:25:49.664847 5136 generic.go:334] "Generic (PLEG): container finished" podID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerID="fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4" exitCode=0
Mar 20 08:25:49 crc kubenswrapper[5136]: I0320 08:25:49.664954 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerDied","Data":"fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4"}
Mar 20 08:25:50 crc kubenswrapper[5136]: I0320 08:25:50.673525 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerStarted","Data":"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934"}
Mar 20 08:25:50 crc kubenswrapper[5136]: I0320 08:25:50.701564 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-crbpt" podStartSLOduration=2.187998989 podStartE2EDuration="4.701544435s" podCreationTimestamp="2026-03-20 08:25:46 +0000 UTC" firstStartedPulling="2026-03-20 08:25:47.652390898 +0000 UTC m=+5779.911702049" lastFinishedPulling="2026-03-20 08:25:50.165936344 +0000 UTC m=+5782.425247495" observedRunningTime="2026-03-20 08:25:50.692905898 +0000 UTC m=+5782.952217059" watchObservedRunningTime="2026-03-20 08:25:50.701544435 +0000 UTC m=+5782.960855596"
Mar 20 08:25:56 crc kubenswrapper[5136]: I0320 08:25:56.589224 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-crbpt"
Mar 20 08:25:56 crc kubenswrapper[5136]: I0320 08:25:56.589282 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-crbpt"
Mar 20 08:25:56 crc kubenswrapper[5136]: I0320 08:25:56.651447 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-crbpt"
Mar 20 08:25:56 crc kubenswrapper[5136]: I0320 08:25:56.773667 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-crbpt"
Mar 20 08:25:57 crc kubenswrapper[5136]: I0320 08:25:57.640050 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"]
Mar 20 08:25:58 crc kubenswrapper[5136]: I0320 08:25:58.752232 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-crbpt" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="registry-server" containerID="cri-o://00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934" gracePeriod=2
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.220800 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crbpt"
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.408311 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content\") pod \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") "
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.408363 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities\") pod \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") "
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.408420 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwfmj\" (UniqueName: \"kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj\") pod \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\" (UID: \"86a4a5b4-ac25-409f-8ba9-e393aef21d43\") "
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.410676 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities" (OuterVolumeSpecName: "utilities") pod "86a4a5b4-ac25-409f-8ba9-e393aef21d43" (UID: "86a4a5b4-ac25-409f-8ba9-e393aef21d43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.416241 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj" (OuterVolumeSpecName: "kube-api-access-lwfmj") pod "86a4a5b4-ac25-409f-8ba9-e393aef21d43" (UID: "86a4a5b4-ac25-409f-8ba9-e393aef21d43"). InnerVolumeSpecName "kube-api-access-lwfmj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.435967 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86a4a5b4-ac25-409f-8ba9-e393aef21d43" (UID: "86a4a5b4-ac25-409f-8ba9-e393aef21d43"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.509519 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lwfmj\" (UniqueName: \"kubernetes.io/projected/86a4a5b4-ac25-409f-8ba9-e393aef21d43-kube-api-access-lwfmj\") on node \"crc\" DevicePath \"\""
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.509555 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.509567 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86a4a5b4-ac25-409f-8ba9-e393aef21d43-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.763314 5136 generic.go:334] "Generic (PLEG): container finished" podID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerID="00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934" exitCode=0
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.763967 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerDied","Data":"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934"}
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.764563 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crbpt" event={"ID":"86a4a5b4-ac25-409f-8ba9-e393aef21d43","Type":"ContainerDied","Data":"a31991e44a39a5ee2d3f1743e87bc7b37d9628d9071253ddeec5aaddbef70cec"}
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.764663 5136 scope.go:117] "RemoveContainer" containerID="00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934"
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.764099 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crbpt"
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.796175 5136 scope.go:117] "RemoveContainer" containerID="fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4"
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.801222 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"]
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.809044 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-crbpt"]
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.832193 5136 scope.go:117] "RemoveContainer" containerID="67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5"
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.851493 5136 scope.go:117] "RemoveContainer" containerID="00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934"
Mar 20 08:25:59 crc kubenswrapper[5136]: E0320 08:25:59.853427 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934\": container with ID starting with 00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934 not found: ID does not exist" containerID="00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934"
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.853473 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934"} err="failed to get container status \"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934\": rpc error: code = NotFound desc = could not find container \"00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934\": container with ID starting with 00e0d8a0357bd3a77dd3d16818eed4043793708a4733f4c34b5b7f7d91bbd934 not found: ID does not exist"
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.853501 5136 scope.go:117] "RemoveContainer" containerID="fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4"
Mar 20 08:25:59 crc kubenswrapper[5136]: E0320 08:25:59.853853 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4\": container with ID starting with fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4 not found: ID does not exist" containerID="fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4"
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.853886 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4"} err="failed to get container status \"fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4\": rpc error: code = NotFound desc = could not find container \"fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4\": container with ID starting with fed099b9188f196807029456bc7dc890a4330be0f7a578f5dbe9cd80bb8964b4 not found: ID does not exist"
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.853906 5136 scope.go:117] "RemoveContainer" containerID="67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5"
Mar 20 08:25:59 crc kubenswrapper[5136]: E0320 08:25:59.854157 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5\": container with ID starting with 67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5 not found: ID does not exist" containerID="67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5"
Mar 20 08:25:59 crc kubenswrapper[5136]: I0320 08:25:59.854193 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5"} err="failed to get container status \"67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5\": rpc error: code = NotFound desc = could not find container \"67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5\": container with ID starting with 67bbb7add8b9022daaf3503aeab8d7b776026890ae0b4ec7c7d3286c341b3cd5 not found: ID does not exist"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.151202 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566586-tq64l"]
Mar 20 08:26:00 crc kubenswrapper[5136]: E0320 08:26:00.152299 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="registry-server"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.152341 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="registry-server"
Mar 20 08:26:00 crc kubenswrapper[5136]: E0320 08:26:00.152392 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="extract-utilities"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.152414 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="extract-utilities"
Mar 20 08:26:00 crc kubenswrapper[5136]: E0320 08:26:00.152453 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="extract-content"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.152474 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="extract-content"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.152899 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" containerName="registry-server"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.155747 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566586-tq64l"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.157895 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.161282 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566586-tq64l"]
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.192059 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.192304 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.219145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64nrz\" (UniqueName: \"kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz\") pod \"auto-csr-approver-29566586-tq64l\" (UID: \"201194d4-8f03-49d4-bf30-d69ece3e6d30\") " pod="openshift-infra/auto-csr-approver-29566586-tq64l"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.320918 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64nrz\" (UniqueName: \"kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz\") pod \"auto-csr-approver-29566586-tq64l\" (UID: \"201194d4-8f03-49d4-bf30-d69ece3e6d30\") " pod="openshift-infra/auto-csr-approver-29566586-tq64l"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.337356 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64nrz\" (UniqueName: \"kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz\") pod \"auto-csr-approver-29566586-tq64l\" (UID: \"201194d4-8f03-49d4-bf30-d69ece3e6d30\") " pod="openshift-infra/auto-csr-approver-29566586-tq64l"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.408845 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86a4a5b4-ac25-409f-8ba9-e393aef21d43" path="/var/lib/kubelet/pods/86a4a5b4-ac25-409f-8ba9-e393aef21d43/volumes"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.527466 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566586-tq64l"
Mar 20 08:26:00 crc kubenswrapper[5136]: I0320 08:26:00.993488 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566586-tq64l"]
Mar 20 08:26:01 crc kubenswrapper[5136]: I0320 08:26:01.791138 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566586-tq64l" event={"ID":"201194d4-8f03-49d4-bf30-d69ece3e6d30","Type":"ContainerStarted","Data":"52cf5df512fc383d0f94258aea02d8aa8ccd6adae53161d3e085276f511cdb34"}
Mar 20 08:26:02 crc kubenswrapper[5136]: I0320 08:26:02.799329 5136 generic.go:334] "Generic (PLEG): container finished" podID="201194d4-8f03-49d4-bf30-d69ece3e6d30" containerID="53356b00d0884cc08ef3105861c0ae9d4bfaf917f6a2b9dfbe1bccff6dec5b55" exitCode=0
Mar 20 08:26:02 crc kubenswrapper[5136]: I0320 08:26:02.799373 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566586-tq64l" event={"ID":"201194d4-8f03-49d4-bf30-d69ece3e6d30","Type":"ContainerDied","Data":"53356b00d0884cc08ef3105861c0ae9d4bfaf917f6a2b9dfbe1bccff6dec5b55"}
Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.056329 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566586-tq64l"
Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.077158 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64nrz\" (UniqueName: \"kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz\") pod \"201194d4-8f03-49d4-bf30-d69ece3e6d30\" (UID: \"201194d4-8f03-49d4-bf30-d69ece3e6d30\") "
Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.085663 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz" (OuterVolumeSpecName: "kube-api-access-64nrz") pod "201194d4-8f03-49d4-bf30-d69ece3e6d30" (UID: "201194d4-8f03-49d4-bf30-d69ece3e6d30"). InnerVolumeSpecName "kube-api-access-64nrz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.177928 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64nrz\" (UniqueName: \"kubernetes.io/projected/201194d4-8f03-49d4-bf30-d69ece3e6d30-kube-api-access-64nrz\") on node \"crc\" DevicePath \"\""
Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.825780 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566586-tq64l" event={"ID":"201194d4-8f03-49d4-bf30-d69ece3e6d30","Type":"ContainerDied","Data":"52cf5df512fc383d0f94258aea02d8aa8ccd6adae53161d3e085276f511cdb34"}
Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.826266 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52cf5df512fc383d0f94258aea02d8aa8ccd6adae53161d3e085276f511cdb34"
Mar 20 08:26:04 crc kubenswrapper[5136]: I0320 08:26:04.825925 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566586-tq64l"
Mar 20 08:26:05 crc kubenswrapper[5136]: I0320 08:26:05.121154 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566580-gxgs4"]
Mar 20 08:26:05 crc kubenswrapper[5136]: I0320 08:26:05.125781 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566580-gxgs4"]
Mar 20 08:26:06 crc kubenswrapper[5136]: I0320 08:26:06.407855 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c13adcfb-f420-46c9-bbde-3350b761780e" path="/var/lib/kubelet/pods/c13adcfb-f420-46c9-bbde-3350b761780e/volumes"
Mar 20 08:26:37 crc kubenswrapper[5136]: I0320 08:26:37.015158 5136 scope.go:117] "RemoveContainer" containerID="49a3dc4dd1c8ed19d1dc39a8fdd22be54b526fe4de3c63ff0b1f4d1ea3e0a979"
Mar 20 08:26:45 crc kubenswrapper[5136]: I0320 08:26:45.822085 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 08:26:45 crc kubenswrapper[5136]: I0320 08:26:45.822883 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 08:27:15 crc kubenswrapper[5136]: I0320 08:27:15.822053 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 08:27:15 crc kubenswrapper[5136]: I0320 08:27:15.822956 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 08:27:45 crc kubenswrapper[5136]: I0320 08:27:45.822245 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 20 08:27:45 crc kubenswrapper[5136]: I0320 08:27:45.822974 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 20 08:27:45 crc kubenswrapper[5136]: I0320 08:27:45.823046 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28"
Mar 20 08:27:45 crc kubenswrapper[5136]: I0320 08:27:45.824056 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 20 08:27:45 crc kubenswrapper[5136]: I0320 08:27:45.824147 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb" gracePeriod=600
Mar 20 08:27:46 crc kubenswrapper[5136]: I0320 08:27:46.650344 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb" exitCode=0
Mar 20 08:27:46 crc kubenswrapper[5136]: I0320 08:27:46.650423 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb"}
Mar 20 08:27:46 crc kubenswrapper[5136]: I0320 08:27:46.651024 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"}
Mar 20 08:27:46 crc kubenswrapper[5136]: I0320 08:27:46.651050 5136 scope.go:117] "RemoveContainer" containerID="4d856d14b59bf6d2497000228bd7ba4556c307f1cf5f4b33ce1d1e8c730027f2"
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.152988 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566588-jjbzp"]
Mar 20 08:28:00 crc kubenswrapper[5136]: E0320 08:28:00.154298 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="201194d4-8f03-49d4-bf30-d69ece3e6d30" containerName="oc"
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.154320 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="201194d4-8f03-49d4-bf30-d69ece3e6d30" containerName="oc"
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.154546 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="201194d4-8f03-49d4-bf30-d69ece3e6d30" containerName="oc"
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.155204 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566588-jjbzp"
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.162279 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.162550 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.164135 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.177682 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566588-jjbzp"]
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.266542 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dstqz\" (UniqueName: \"kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz\") pod \"auto-csr-approver-29566588-jjbzp\" (UID: \"ca2b685a-cbe9-4989-87d2-09c8c1b3a846\") " pod="openshift-infra/auto-csr-approver-29566588-jjbzp"
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.368957 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dstqz\" (UniqueName: \"kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz\") pod \"auto-csr-approver-29566588-jjbzp\" (UID: \"ca2b685a-cbe9-4989-87d2-09c8c1b3a846\") " pod="openshift-infra/auto-csr-approver-29566588-jjbzp"
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.394064 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dstqz\" (UniqueName: \"kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz\") pod \"auto-csr-approver-29566588-jjbzp\" (UID: \"ca2b685a-cbe9-4989-87d2-09c8c1b3a846\") " pod="openshift-infra/auto-csr-approver-29566588-jjbzp"
Mar 20 08:28:00 crc kubenswrapper[5136]: I0320 08:28:00.497167 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566588-jjbzp"
Mar 20 08:28:01 crc kubenswrapper[5136]: I0320 08:28:01.006882 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566588-jjbzp"]
Mar 20 08:28:01 crc kubenswrapper[5136]: W0320 08:28:01.011181 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca2b685a_cbe9_4989_87d2_09c8c1b3a846.slice/crio-6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313 WatchSource:0}: Error finding container 6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313: Status 404 returned error can't find the container with id 6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313
Mar 20 08:28:01 crc kubenswrapper[5136]: I0320 08:28:01.772884 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566588-jjbzp" event={"ID":"ca2b685a-cbe9-4989-87d2-09c8c1b3a846","Type":"ContainerStarted","Data":"6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313"}
Mar 20 08:28:02 crc kubenswrapper[5136]: I0320 08:28:02.779906 5136 generic.go:334] "Generic (PLEG): container finished" podID="ca2b685a-cbe9-4989-87d2-09c8c1b3a846" containerID="0bdf2244928c50e418739f666f637d0c122d85d20e0278df3b68b937bca89d79" exitCode=0
Mar 20 08:28:02 crc kubenswrapper[5136]: I0320 08:28:02.779958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566588-jjbzp" event={"ID":"ca2b685a-cbe9-4989-87d2-09c8c1b3a846","Type":"ContainerDied","Data":"0bdf2244928c50e418739f666f637d0c122d85d20e0278df3b68b937bca89d79"}
Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.102699 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566588-jjbzp"
Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.120917 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dstqz\" (UniqueName: \"kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz\") pod \"ca2b685a-cbe9-4989-87d2-09c8c1b3a846\" (UID: \"ca2b685a-cbe9-4989-87d2-09c8c1b3a846\") "
Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.130033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz" (OuterVolumeSpecName: "kube-api-access-dstqz") pod "ca2b685a-cbe9-4989-87d2-09c8c1b3a846" (UID: "ca2b685a-cbe9-4989-87d2-09c8c1b3a846"). InnerVolumeSpecName "kube-api-access-dstqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.223371 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dstqz\" (UniqueName: \"kubernetes.io/projected/ca2b685a-cbe9-4989-87d2-09c8c1b3a846-kube-api-access-dstqz\") on node \"crc\" DevicePath \"\""
Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.802405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566588-jjbzp" event={"ID":"ca2b685a-cbe9-4989-87d2-09c8c1b3a846","Type":"ContainerDied","Data":"6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313"}
Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.802444 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e594069c9281d1350197c7f6cb507a3383788799464d74dfcd291283ac6e313"
Mar 20 08:28:04 crc kubenswrapper[5136]: I0320 08:28:04.802560 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566588-jjbzp"
Mar 20 08:28:05 crc kubenswrapper[5136]: I0320 08:28:05.188010 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566582-wv2dn"]
Mar 20 08:28:05 crc kubenswrapper[5136]: I0320 08:28:05.192337 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566582-wv2dn"]
Mar 20 08:28:06 crc kubenswrapper[5136]: I0320 08:28:06.407261 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36fd17ca-2655-4388-807a-3740ab031402" path="/var/lib/kubelet/pods/36fd17ca-2655-4388-807a-3740ab031402/volumes"
Mar 20 08:28:37 crc kubenswrapper[5136]: I0320 08:28:37.152889 5136 scope.go:117] "RemoveContainer" containerID="a529e171ce4e400e77421cdd13032062bdb0c3099972bc7c31cdc0391d1d0584"
Mar 20 08:29:43 crc kubenswrapper[5136]: I0320 08:29:43.861167 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-jqplr"]
Mar 20 08:29:43 crc kubenswrapper[5136]: I0320 08:29:43.869645 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-jqplr"]
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.009528 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-5cjkp"]
Mar 20 08:29:44 crc kubenswrapper[5136]: E0320 08:29:44.009930 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca2b685a-cbe9-4989-87d2-09c8c1b3a846" containerName="oc"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.009953 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca2b685a-cbe9-4989-87d2-09c8c1b3a846" containerName="oc"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.010093 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca2b685a-cbe9-4989-87d2-09c8c1b3a846" containerName="oc"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.010735 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-5cjkp"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.012939 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.013141 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.013438 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.013681 5136 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-65jln"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.014623 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-5cjkp"]
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.197171 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.197231 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8wpf\" (UniqueName: \"kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.197271 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.298996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8wpf\" (UniqueName: \"kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.299126 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.299314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.299786 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.299933 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp"
Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.323841
5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8wpf\" (UniqueName: \"kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf\") pod \"crc-storage-crc-5cjkp\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.327602 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.409183 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="868b5502-6c3e-4e3b-bc43-c0875e71512f" path="/var/lib/kubelet/pods/868b5502-6c3e-4e3b-bc43-c0875e71512f/volumes" Mar 20 08:29:44 crc kubenswrapper[5136]: I0320 08:29:44.798338 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-5cjkp"] Mar 20 08:29:45 crc kubenswrapper[5136]: I0320 08:29:45.604968 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5cjkp" event={"ID":"13b3d6a1-236a-4eec-8755-d6673a652114","Type":"ContainerStarted","Data":"bcf9c40c06591cfa7b6dd8c1bc0c60d668bc51fbc72a7b0277f617cbeb8adae7"} Mar 20 08:29:46 crc kubenswrapper[5136]: I0320 08:29:46.611832 5136 generic.go:334] "Generic (PLEG): container finished" podID="13b3d6a1-236a-4eec-8755-d6673a652114" containerID="a84d841fa14dbb7d163049ae2a42d3d241fc2e9ace22731699a4238f410674cb" exitCode=0 Mar 20 08:29:46 crc kubenswrapper[5136]: I0320 08:29:46.611871 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5cjkp" event={"ID":"13b3d6a1-236a-4eec-8755-d6673a652114","Type":"ContainerDied","Data":"a84d841fa14dbb7d163049ae2a42d3d241fc2e9ace22731699a4238f410674cb"} Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.025168 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.152985 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage\") pod \"13b3d6a1-236a-4eec-8755-d6673a652114\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.153295 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt\") pod \"13b3d6a1-236a-4eec-8755-d6673a652114\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.153385 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8wpf\" (UniqueName: \"kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf\") pod \"13b3d6a1-236a-4eec-8755-d6673a652114\" (UID: \"13b3d6a1-236a-4eec-8755-d6673a652114\") " Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.153399 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "13b3d6a1-236a-4eec-8755-d6673a652114" (UID: "13b3d6a1-236a-4eec-8755-d6673a652114"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.153713 5136 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/13b3d6a1-236a-4eec-8755-d6673a652114-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.159157 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf" (OuterVolumeSpecName: "kube-api-access-m8wpf") pod "13b3d6a1-236a-4eec-8755-d6673a652114" (UID: "13b3d6a1-236a-4eec-8755-d6673a652114"). InnerVolumeSpecName "kube-api-access-m8wpf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.178653 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "13b3d6a1-236a-4eec-8755-d6673a652114" (UID: "13b3d6a1-236a-4eec-8755-d6673a652114"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.254728 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8wpf\" (UniqueName: \"kubernetes.io/projected/13b3d6a1-236a-4eec-8755-d6673a652114-kube-api-access-m8wpf\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.254754 5136 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/13b3d6a1-236a-4eec-8755-d6673a652114-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.628885 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-5cjkp" event={"ID":"13b3d6a1-236a-4eec-8755-d6673a652114","Type":"ContainerDied","Data":"bcf9c40c06591cfa7b6dd8c1bc0c60d668bc51fbc72a7b0277f617cbeb8adae7"} Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.628919 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcf9c40c06591cfa7b6dd8c1bc0c60d668bc51fbc72a7b0277f617cbeb8adae7" Mar 20 08:29:48 crc kubenswrapper[5136]: I0320 08:29:48.628928 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-5cjkp" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.277017 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["crc-storage/crc-storage-crc-5cjkp"] Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.282372 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["crc-storage/crc-storage-crc-5cjkp"] Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.409543 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13b3d6a1-236a-4eec-8755-d6673a652114" path="/var/lib/kubelet/pods/13b3d6a1-236a-4eec-8755-d6673a652114/volumes" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.410379 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-m5j8k"] Mar 20 08:29:50 crc kubenswrapper[5136]: E0320 08:29:50.410785 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b3d6a1-236a-4eec-8755-d6673a652114" containerName="storage" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.410852 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b3d6a1-236a-4eec-8755-d6673a652114" containerName="storage" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.411133 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b3d6a1-236a-4eec-8755-d6673a652114" containerName="storage" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.412943 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.415474 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-m5j8k"] Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.415784 5136 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-65jln" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.416021 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.416930 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.417335 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.586012 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94bmv\" (UniqueName: \"kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.586110 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.586236 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt\") pod \"crc-storage-crc-m5j8k\" (UID: 
\"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.687410 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.687481 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94bmv\" (UniqueName: \"kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.687518 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.687656 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.688287 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.711912 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94bmv\" (UniqueName: \"kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv\") pod \"crc-storage-crc-m5j8k\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:50 crc kubenswrapper[5136]: I0320 08:29:50.730327 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:51 crc kubenswrapper[5136]: I0320 08:29:51.163337 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-m5j8k"] Mar 20 08:29:51 crc kubenswrapper[5136]: I0320 08:29:51.648450 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-m5j8k" event={"ID":"b15f2d65-52cd-4e08-b35d-63b4e2f7559c","Type":"ContainerStarted","Data":"148a4ca422766dccfdc07a7b1f7ac69ec65bdc456d2ba035b2dae2a739a96f6e"} Mar 20 08:29:52 crc kubenswrapper[5136]: I0320 08:29:52.656786 5136 generic.go:334] "Generic (PLEG): container finished" podID="b15f2d65-52cd-4e08-b35d-63b4e2f7559c" containerID="57c12b11e582d6b79221d78f58da4f5e7fc7894223d64657ac01cb8df0d9ebf6" exitCode=0 Mar 20 08:29:52 crc kubenswrapper[5136]: I0320 08:29:52.656833 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-m5j8k" event={"ID":"b15f2d65-52cd-4e08-b35d-63b4e2f7559c","Type":"ContainerDied","Data":"57c12b11e582d6b79221d78f58da4f5e7fc7894223d64657ac01cb8df0d9ebf6"} Mar 20 08:29:53 crc kubenswrapper[5136]: I0320 08:29:53.978467 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.138725 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt\") pod \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.138798 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94bmv\" (UniqueName: \"kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv\") pod \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.138951 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "b15f2d65-52cd-4e08-b35d-63b4e2f7559c" (UID: "b15f2d65-52cd-4e08-b35d-63b4e2f7559c"). InnerVolumeSpecName "node-mnt". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.138982 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage\") pod \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\" (UID: \"b15f2d65-52cd-4e08-b35d-63b4e2f7559c\") " Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.139388 5136 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-node-mnt\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.144721 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv" (OuterVolumeSpecName: "kube-api-access-94bmv") pod "b15f2d65-52cd-4e08-b35d-63b4e2f7559c" (UID: "b15f2d65-52cd-4e08-b35d-63b4e2f7559c"). InnerVolumeSpecName "kube-api-access-94bmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.156659 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "b15f2d65-52cd-4e08-b35d-63b4e2f7559c" (UID: "b15f2d65-52cd-4e08-b35d-63b4e2f7559c"). InnerVolumeSpecName "crc-storage". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.240454 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94bmv\" (UniqueName: \"kubernetes.io/projected/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-kube-api-access-94bmv\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.240495 5136 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/b15f2d65-52cd-4e08-b35d-63b4e2f7559c-crc-storage\") on node \"crc\" DevicePath \"\"" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.674112 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-m5j8k" event={"ID":"b15f2d65-52cd-4e08-b35d-63b4e2f7559c","Type":"ContainerDied","Data":"148a4ca422766dccfdc07a7b1f7ac69ec65bdc456d2ba035b2dae2a739a96f6e"} Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.674166 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="148a4ca422766dccfdc07a7b1f7ac69ec65bdc456d2ba035b2dae2a739a96f6e" Mar 20 08:29:54 crc kubenswrapper[5136]: I0320 08:29:54.674211 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="crc-storage/crc-storage-crc-m5j8k" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.159532 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566590-9pt5f"] Mar 20 08:30:00 crc kubenswrapper[5136]: E0320 08:30:00.160577 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b15f2d65-52cd-4e08-b35d-63b4e2f7559c" containerName="storage" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.160601 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b15f2d65-52cd-4e08-b35d-63b4e2f7559c" containerName="storage" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.160975 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b15f2d65-52cd-4e08-b35d-63b4e2f7559c" containerName="storage" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.161680 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.165761 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.166163 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.166335 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.173892 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh"] Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.175181 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.177284 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.177505 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.190088 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566590-9pt5f"] Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.209043 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh"] Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.329516 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rfpr\" (UniqueName: \"kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.329625 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjghn\" (UniqueName: \"kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn\") pod \"auto-csr-approver-29566590-9pt5f\" (UID: \"51c8efa5-d30c-4426-ad6e-4aa0880c0563\") " pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.330073 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.330122 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.432227 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.432439 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.432592 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rfpr\" (UniqueName: \"kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: 
I0320 08:30:00.432713 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjghn\" (UniqueName: \"kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn\") pod \"auto-csr-approver-29566590-9pt5f\" (UID: \"51c8efa5-d30c-4426-ad6e-4aa0880c0563\") " pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.433797 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.444974 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.452694 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rfpr\" (UniqueName: \"kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr\") pod \"collect-profiles-29566590-fplwh\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.454524 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjghn\" (UniqueName: \"kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn\") pod \"auto-csr-approver-29566590-9pt5f\" (UID: \"51c8efa5-d30c-4426-ad6e-4aa0880c0563\") " 
pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.495490 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.505454 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:00 crc kubenswrapper[5136]: I0320 08:30:00.980805 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566590-9pt5f"] Mar 20 08:30:01 crc kubenswrapper[5136]: I0320 08:30:01.039956 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh"] Mar 20 08:30:01 crc kubenswrapper[5136]: W0320 08:30:01.043735 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda095f941_55a7_43b1_b794_f5f9d3c1cc97.slice/crio-6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0 WatchSource:0}: Error finding container 6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0: Status 404 returned error can't find the container with id 6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0 Mar 20 08:30:01 crc kubenswrapper[5136]: I0320 08:30:01.733859 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" event={"ID":"51c8efa5-d30c-4426-ad6e-4aa0880c0563","Type":"ContainerStarted","Data":"85efc13d83e2f9a1344594bc7b808fcfb8fd76ffae6c969416730d2de9aeaeb7"} Mar 20 08:30:01 crc kubenswrapper[5136]: I0320 08:30:01.735932 5136 generic.go:334] "Generic (PLEG): container finished" podID="a095f941-55a7-43b1-b794-f5f9d3c1cc97" containerID="e8e14d68b90f3a95efeb8ef9a4f1e50d1c91dec543afa6a9b52e138a774c4cde" exitCode=0 Mar 20 08:30:01 crc kubenswrapper[5136]: I0320 
08:30:01.735958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" event={"ID":"a095f941-55a7-43b1-b794-f5f9d3c1cc97","Type":"ContainerDied","Data":"e8e14d68b90f3a95efeb8ef9a4f1e50d1c91dec543afa6a9b52e138a774c4cde"} Mar 20 08:30:01 crc kubenswrapper[5136]: I0320 08:30:01.735977 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" event={"ID":"a095f941-55a7-43b1-b794-f5f9d3c1cc97","Type":"ContainerStarted","Data":"6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0"} Mar 20 08:30:02 crc kubenswrapper[5136]: I0320 08:30:02.743067 5136 generic.go:334] "Generic (PLEG): container finished" podID="51c8efa5-d30c-4426-ad6e-4aa0880c0563" containerID="d67dfe1060ac0ac0db1818a3ab60ffceda0123c6ffe3b59b89e0430a3ae809a2" exitCode=0 Mar 20 08:30:02 crc kubenswrapper[5136]: I0320 08:30:02.743380 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" event={"ID":"51c8efa5-d30c-4426-ad6e-4aa0880c0563","Type":"ContainerDied","Data":"d67dfe1060ac0ac0db1818a3ab60ffceda0123c6ffe3b59b89e0430a3ae809a2"} Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.001669 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.170404 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume\") pod \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.170555 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume\") pod \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.170627 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rfpr\" (UniqueName: \"kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr\") pod \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\" (UID: \"a095f941-55a7-43b1-b794-f5f9d3c1cc97\") " Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.171284 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume" (OuterVolumeSpecName: "config-volume") pod "a095f941-55a7-43b1-b794-f5f9d3c1cc97" (UID: "a095f941-55a7-43b1-b794-f5f9d3c1cc97"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.176399 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr" (OuterVolumeSpecName: "kube-api-access-4rfpr") pod "a095f941-55a7-43b1-b794-f5f9d3c1cc97" (UID: "a095f941-55a7-43b1-b794-f5f9d3c1cc97"). 
InnerVolumeSpecName "kube-api-access-4rfpr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.177656 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a095f941-55a7-43b1-b794-f5f9d3c1cc97" (UID: "a095f941-55a7-43b1-b794-f5f9d3c1cc97"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.272169 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a095f941-55a7-43b1-b794-f5f9d3c1cc97-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.272216 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a095f941-55a7-43b1-b794-f5f9d3c1cc97-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.272229 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rfpr\" (UniqueName: \"kubernetes.io/projected/a095f941-55a7-43b1-b794-f5f9d3c1cc97-kube-api-access-4rfpr\") on node \"crc\" DevicePath \"\"" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.756284 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.756995 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566590-fplwh" event={"ID":"a095f941-55a7-43b1-b794-f5f9d3c1cc97","Type":"ContainerDied","Data":"6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0"} Mar 20 08:30:03 crc kubenswrapper[5136]: I0320 08:30:03.757046 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c4018d3cf9ebe3bcd6750da3434af99036efba58b7789357bacd0f1a0f25ab0" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.079516 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7"] Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.087624 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566545-r7zm7"] Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.095366 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.284125 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjghn\" (UniqueName: \"kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn\") pod \"51c8efa5-d30c-4426-ad6e-4aa0880c0563\" (UID: \"51c8efa5-d30c-4426-ad6e-4aa0880c0563\") " Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.289036 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn" (OuterVolumeSpecName: "kube-api-access-pjghn") pod "51c8efa5-d30c-4426-ad6e-4aa0880c0563" (UID: "51c8efa5-d30c-4426-ad6e-4aa0880c0563"). 
InnerVolumeSpecName "kube-api-access-pjghn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.385373 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjghn\" (UniqueName: \"kubernetes.io/projected/51c8efa5-d30c-4426-ad6e-4aa0880c0563-kube-api-access-pjghn\") on node \"crc\" DevicePath \"\"" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.406536 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb32e01f-d49f-4ba1-a1d4-c693765737e7" path="/var/lib/kubelet/pods/eb32e01f-d49f-4ba1-a1d4-c693765737e7/volumes" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.763179 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" event={"ID":"51c8efa5-d30c-4426-ad6e-4aa0880c0563","Type":"ContainerDied","Data":"85efc13d83e2f9a1344594bc7b808fcfb8fd76ffae6c969416730d2de9aeaeb7"} Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.763591 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85efc13d83e2f9a1344594bc7b808fcfb8fd76ffae6c969416730d2de9aeaeb7" Mar 20 08:30:04 crc kubenswrapper[5136]: I0320 08:30:04.763265 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566590-9pt5f" Mar 20 08:30:05 crc kubenswrapper[5136]: I0320 08:30:05.152849 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566584-ht4pj"] Mar 20 08:30:05 crc kubenswrapper[5136]: I0320 08:30:05.159496 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566584-ht4pj"] Mar 20 08:30:06 crc kubenswrapper[5136]: I0320 08:30:06.409013 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2" path="/var/lib/kubelet/pods/49bc2e6c-77f7-42f1-ba1e-86bbc6bdc2d2/volumes" Mar 20 08:30:15 crc kubenswrapper[5136]: I0320 08:30:15.822429 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:30:15 crc kubenswrapper[5136]: I0320 08:30:15.823324 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:30:37 crc kubenswrapper[5136]: I0320 08:30:37.236567 5136 scope.go:117] "RemoveContainer" containerID="db23fd78398ebb125a153768bba0437d8fa09615fe8803585f26e9cdf330d2a9" Mar 20 08:30:37 crc kubenswrapper[5136]: I0320 08:30:37.284892 5136 scope.go:117] "RemoveContainer" containerID="b6e56033203d796df41b39eddfb04e55cd2822f9ba7f0e9edd26141d7d5d92b3" Mar 20 08:30:37 crc kubenswrapper[5136]: I0320 08:30:37.361361 5136 scope.go:117] "RemoveContainer" containerID="1634bfed9d3426f391a9ba220363e60d18b7a13e0b5dd7787df7f812b3c4e0ea" Mar 20 08:30:45 crc 
kubenswrapper[5136]: I0320 08:30:45.822397 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:30:45 crc kubenswrapper[5136]: I0320 08:30:45.823133 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:31:15 crc kubenswrapper[5136]: I0320 08:31:15.822493 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:31:15 crc kubenswrapper[5136]: I0320 08:31:15.823224 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:31:15 crc kubenswrapper[5136]: I0320 08:31:15.823329 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:31:15 crc kubenswrapper[5136]: I0320 08:31:15.824175 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"} 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:31:15 crc kubenswrapper[5136]: I0320 08:31:15.824256 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" gracePeriod=600 Mar 20 08:31:15 crc kubenswrapper[5136]: E0320 08:31:15.956600 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:31:16 crc kubenswrapper[5136]: I0320 08:31:16.354638 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" exitCode=0 Mar 20 08:31:16 crc kubenswrapper[5136]: I0320 08:31:16.354687 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"} Mar 20 08:31:16 crc kubenswrapper[5136]: I0320 08:31:16.354753 5136 scope.go:117] "RemoveContainer" containerID="1978392c6dea0f795648b9101e593b34f84554a1dfb9a32198f55771c5e697bb" Mar 20 08:31:16 crc kubenswrapper[5136]: I0320 08:31:16.355571 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 
20 08:31:16 crc kubenswrapper[5136]: E0320 08:31:16.356441 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:31:28 crc kubenswrapper[5136]: I0320 08:31:28.405262 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:31:28 crc kubenswrapper[5136]: E0320 08:31:28.406590 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:31:41 crc kubenswrapper[5136]: I0320 08:31:41.397519 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:31:41 crc kubenswrapper[5136]: E0320 08:31:41.398651 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:31:56 crc kubenswrapper[5136]: I0320 08:31:56.396450 5136 scope.go:117] "RemoveContainer" 
containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:31:56 crc kubenswrapper[5136]: E0320 08:31:56.397538 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.180164 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566592-gnh7d"] Mar 20 08:32:00 crc kubenswrapper[5136]: E0320 08:32:00.180454 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c8efa5-d30c-4426-ad6e-4aa0880c0563" containerName="oc" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.180466 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c8efa5-d30c-4426-ad6e-4aa0880c0563" containerName="oc" Mar 20 08:32:00 crc kubenswrapper[5136]: E0320 08:32:00.180489 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a095f941-55a7-43b1-b794-f5f9d3c1cc97" containerName="collect-profiles" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.180496 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a095f941-55a7-43b1-b794-f5f9d3c1cc97" containerName="collect-profiles" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.180610 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a095f941-55a7-43b1-b794-f5f9d3c1cc97" containerName="collect-profiles" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.180623 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c8efa5-d30c-4426-ad6e-4aa0880c0563" containerName="oc" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.181033 5136 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.183940 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.183943 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.184215 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.198452 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566592-gnh7d"] Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.278624 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz7sk\" (UniqueName: \"kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk\") pod \"auto-csr-approver-29566592-gnh7d\" (UID: \"2e2af690-159e-4938-b0b0-35e042cc8393\") " pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.380582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cz7sk\" (UniqueName: \"kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk\") pod \"auto-csr-approver-29566592-gnh7d\" (UID: \"2e2af690-159e-4938-b0b0-35e042cc8393\") " pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.403900 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz7sk\" (UniqueName: \"kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk\") pod \"auto-csr-approver-29566592-gnh7d\" (UID: 
\"2e2af690-159e-4938-b0b0-35e042cc8393\") " pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.499497 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.781540 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566592-gnh7d"] Mar 20 08:32:00 crc kubenswrapper[5136]: I0320 08:32:00.795238 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:32:01 crc kubenswrapper[5136]: I0320 08:32:01.727326 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" event={"ID":"2e2af690-159e-4938-b0b0-35e042cc8393","Type":"ContainerStarted","Data":"2e337ec26bbbf76ef0d66c40704a9d16485d50d4b23704445de55eadafddf622"} Mar 20 08:32:02 crc kubenswrapper[5136]: I0320 08:32:02.734943 5136 generic.go:334] "Generic (PLEG): container finished" podID="2e2af690-159e-4938-b0b0-35e042cc8393" containerID="6f1e73339774fdb849b7c14ca46c4e23637ecc11d975480f8593fb668065f9a0" exitCode=0 Mar 20 08:32:02 crc kubenswrapper[5136]: I0320 08:32:02.735006 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" event={"ID":"2e2af690-159e-4938-b0b0-35e042cc8393","Type":"ContainerDied","Data":"6f1e73339774fdb849b7c14ca46c4e23637ecc11d975480f8593fb668065f9a0"} Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.058427 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.141198 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz7sk\" (UniqueName: \"kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk\") pod \"2e2af690-159e-4938-b0b0-35e042cc8393\" (UID: \"2e2af690-159e-4938-b0b0-35e042cc8393\") " Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.147952 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk" (OuterVolumeSpecName: "kube-api-access-cz7sk") pod "2e2af690-159e-4938-b0b0-35e042cc8393" (UID: "2e2af690-159e-4938-b0b0-35e042cc8393"). InnerVolumeSpecName "kube-api-access-cz7sk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.243039 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cz7sk\" (UniqueName: \"kubernetes.io/projected/2e2af690-159e-4938-b0b0-35e042cc8393-kube-api-access-cz7sk\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.621943 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"] Mar 20 08:32:04 crc kubenswrapper[5136]: E0320 08:32:04.622510 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e2af690-159e-4938-b0b0-35e042cc8393" containerName="oc" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.622528 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e2af690-159e-4938-b0b0-35e042cc8393" containerName="oc" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.622657 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e2af690-159e-4938-b0b0-35e042cc8393" containerName="oc" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.623342 5136 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.624785 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.625067 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.625068 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.633737 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"] Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.635220 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.637689 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.637751 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-p8dqf" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.641922 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"] Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.654193 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.654607 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.654792 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm25t\" (UniqueName: \"kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.655083 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9zw7\" (UniqueName: \"kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.655362 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.673160 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"] Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.748860 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" event={"ID":"2e2af690-159e-4938-b0b0-35e042cc8393","Type":"ContainerDied","Data":"2e337ec26bbbf76ef0d66c40704a9d16485d50d4b23704445de55eadafddf622"} Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.748895 5136 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e337ec26bbbf76ef0d66c40704a9d16485d50d4b23704445de55eadafddf622" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.748923 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566592-gnh7d" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.756382 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.756447 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.756464 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.756484 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zm25t\" (UniqueName: \"kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.756530 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-l9zw7\" (UniqueName: \"kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.757587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.757618 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.757765 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.774505 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zm25t\" (UniqueName: \"kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t\") pod \"dnsmasq-dns-6648865bb9-n4kmb\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") " pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.786884 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9zw7\" (UniqueName: 
\"kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7\") pod \"dnsmasq-dns-86ffc6867-88z5z\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") " pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.944392 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" Mar 20 08:32:04 crc kubenswrapper[5136]: I0320 08:32:04.953047 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.147637 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566586-tq64l"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.156232 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566586-tq64l"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.213880 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.388840 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.437289 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"] Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.439433 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.449405 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"]
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.470134 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czkhl\" (UniqueName: \"kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.470173 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.470212 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.553762 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"]
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.571265 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.571377 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czkhl\" (UniqueName: \"kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.571399 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.572419 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.572429 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.613418 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czkhl\" (UniqueName: \"kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl\") pod \"dnsmasq-dns-7f7c7bdb8c-sdq9q\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") " pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.725189 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"]
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.751373 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"]
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.752897 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.760850 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"]
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.761917 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.774237 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.774321 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.774419 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6w6c\" (UniqueName: \"kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.782975 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" event={"ID":"e4a91420-177b-479a-aeb6-0fdc31a375e7","Type":"ContainerStarted","Data":"ed5c7fa75820fbac3dc8454615e81f7d5004c1cabef4298a4f0b3764020471ee"}
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.786190 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" event={"ID":"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0","Type":"ContainerStarted","Data":"4912bc2f2e88dde7c6725f660921ca75fb40083a65fdbf07840e1559e1bb6656"}
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.879344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6w6c\" (UniqueName: \"kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.879435 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.879453 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.881455 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.882929 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:05 crc kubenswrapper[5136]: I0320 08:32:05.912880 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6w6c\" (UniqueName: \"kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c\") pod \"dnsmasq-dns-685785d49f-r6vtp\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.077896 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.353210 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"]
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.424519 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="201194d4-8f03-49d4-bf30-d69ece3e6d30" path="/var/lib/kubelet/pods/201194d4-8f03-49d4-bf30-d69ece3e6d30/volumes"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.611631 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.613024 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.614437 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hxrcr"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.614898 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.615003 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.615152 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.615283 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.615366 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.615670 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.630372 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.696533 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"]
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700155 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700188 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700222 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700237 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700255 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700497 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700551 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5rbp\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700589 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700630 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700722 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.700756 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.797793 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" event={"ID":"da7b3de9-906c-4470-9b45-498268d7161b","Type":"ContainerStarted","Data":"b875bc351439176b0ec46aee9af86021a8df8774ef7b812887178dc8835e12d7"}
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.799793 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" event={"ID":"3045f340-8dd6-4a70-8407-ca021577d30c","Type":"ContainerStarted","Data":"4aa6b047cacc9f7c7b7b22f5ea520282914082a02383d0fe88b6a663fa015092"}
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801573 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801605 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801646 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801670 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801686 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801755 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801771 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5rbp\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801808 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801854 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.801871 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.802846 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.803122 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.804544 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.804658 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.805623 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.809422 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.809621 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.809649 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ffa5bed1f53993894ec26cbc3fe1cf1f67f60a4766508e053a6d4d74251ebc8b/globalmount\"" pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.811401 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.812190 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.818316 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.822272 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5rbp\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.842254 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " pod="openstack/rabbitmq-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.883294 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.884341 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887111 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887249 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887294 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887436 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887502 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rxs59"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887579 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.887695 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.898387 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903001 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903074 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903239 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903293 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903325 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903418 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903488 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ffp\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903559 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903617 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903714 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.903794 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:06 crc kubenswrapper[5136]: I0320 08:32:06.977063 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004824 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004876 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004894 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004913 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004931 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004948 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22ffp\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004973 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.004994 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.005024 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.005050 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.005075 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.006252 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.006522 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.008301 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.008909 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.009627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.010095 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.010119 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d0b8d36279754dae866b74592d574f198fefe86644de71828fcab427244d57e1/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.011123 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.011215 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.013904 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.014985 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.023549 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ffp\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.075102 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.207507 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.337422 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.341448 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.345546 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.350077 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.350081 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.350198 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vj2gs" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.352139 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.370601 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.421554 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:32:07 crc kubenswrapper[5136]: W0320 08:32:07.450496 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49b925de_d698_4589_9f71_cf485dd617d2.slice/crio-e183efadd223541a56fac8eea2671471e01e5264c953e53547b41494176cef67 WatchSource:0}: Error finding container e183efadd223541a56fac8eea2671471e01e5264c953e53547b41494176cef67: Status 404 returned error can't find the container with id e183efadd223541a56fac8eea2671471e01e5264c953e53547b41494176cef67 Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512470 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512607 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512679 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512726 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512767 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trz2t\" (UniqueName: \"kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512793 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512842 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.512933 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616452 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616512 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616544 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: 
\"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616591 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616629 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616651 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trz2t\" (UniqueName: \"kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616667 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.616692 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: 
\"kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.617093 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.618634 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.619497 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.620186 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.621937 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.621969 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1dd8871675ece06a49a0475a0d4042d4a0827aefbbf3be5b2f22e999c485a8b1/globalmount\"" pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.623670 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.624558 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.632998 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trz2t\" (UniqueName: \"kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t\") pod \"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.680932 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") pod 
\"openstack-galera-0\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " pod="openstack/openstack-galera-0" Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.694243 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.876004 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerStarted","Data":"7392b8c85d117a71e5c4a2c47ce52f8f48e947a5928969391300b523dfc80f5d"} Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.878608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerStarted","Data":"e183efadd223541a56fac8eea2671471e01e5264c953e53547b41494176cef67"} Mar 20 08:32:07 crc kubenswrapper[5136]: I0320 08:32:07.965131 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.408994 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:32:08 crc kubenswrapper[5136]: E0320 08:32:08.409569 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.425303 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.886738 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerStarted","Data":"a3ca3c82737ff9666b59eaa26c9fcedbcaf8829fd4670afceb4988d0c1b4a157"} Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.951484 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.953004 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.956211 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.957428 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.958146 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-mtswd" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.959531 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Mar 20 08:32:08 crc kubenswrapper[5136]: I0320 08:32:08.963557 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.146787 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.146877 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.146912 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.147155 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.147211 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.147249 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.147278 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mxw\" (UniqueName: \"kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.147306 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.248409 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.251637 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.251665 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.251691 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c6mxw\" (UniqueName: \"kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.251720 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.251784 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.252028 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.252073 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.252914 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.250099 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.253432 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.256176 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.256211 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/a5c355bf1a7209505d65ff14ef56fc7e9b635a23bbc586188890ca98ac6ccf4c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.256825 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.264384 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.265054 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.304252 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.304987 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c6mxw\" (UniqueName: \"kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.306048 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.312292 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-6n4pc" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.312936 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.313391 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.316285 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") pod \"openstack-cell1-galera-0\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " pod="openstack/openstack-cell1-galera-0" Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.332760 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.346857 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.468871 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.469079 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.469249 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.469286 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xz48\" (UniqueName: \"kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.469394 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.571293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.572404 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.572880 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.574001 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.574301 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.574456 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.574491 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xz48\" (UniqueName: \"kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.576986 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.577495 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.596608 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xz48\" (UniqueName: \"kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48\") pod \"memcached-0\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.678772 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Mar 20 08:32:09 crc kubenswrapper[5136]: I0320 08:32:09.876123 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Mar 20 08:32:10 crc kubenswrapper[5136]: I0320 08:32:10.183599 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Mar 20 08:32:10 crc kubenswrapper[5136]: I0320 08:32:10.908405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6fcd7752-be4a-45af-b12d-f4ee6275b3b3","Type":"ContainerStarted","Data":"9cf9cadd89e2b28a829e6e81692bf2693c40f2c59fbdfc4c88536b7ae65a16d3"}
Mar 20 08:32:10 crc kubenswrapper[5136]: I0320 08:32:10.912343 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerStarted","Data":"8744afbb6fc5b78de44cc1ad3a2d2c06bbc7e574d3d24b9b63a1c4c9c4199a2b"}
Mar 20 08:32:23 crc kubenswrapper[5136]: I0320 08:32:23.396544 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"
Mar 20 08:32:23 crc kubenswrapper[5136]: E0320 08:32:23.397292 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:32:33 crc kubenswrapper[5136]: E0320 08:32:33.266403 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef"
Mar 20 08:32:33 crc kubenswrapper[5136]: E0320 08:32:33.266726 5136 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef"
Mar 20 08:32:33 crc kubenswrapper[5136]: E0320 08:32:33.266850 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n8chc6h5bh56fh546hb7hc8h67h5bchffh577h697h5b5h5bdh59bhf6hf4h558hb5h578h595h5cchfbh644h59ch7fh654h547h587h5cbh5d5h8fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-czkhl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7f7c7bdb8c-sdq9q_openstack(3045f340-8dd6-4a70-8407-ca021577d30c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 20 08:32:33 crc kubenswrapper[5136]: E0320 08:32:33.267982 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" podUID="3045f340-8dd6-4a70-8407-ca021577d30c"
Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.104464 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef\\\"\"" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" podUID="3045f340-8dd6-4a70-8407-ca021577d30c"
Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.241395 5136 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef"
Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.241449 5136 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef"
Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.242056 5136 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-neutron-server:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n647h57bh695h68dh54fhf5hc5h67h5d4hb6h696h685h54ch6h599h5c5h679h74h689h644h5c8h64ch555h5c6h5dh569h698h59fh66ch57bh5b9hb7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm25t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6648865bb9-n4kmb_openstack(2c42ef9c-e931-4771-9d5f-3fa6b2a851c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.243914 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" podUID="2c42ef9c-e931-4771-9d5f-3fa6b2a851c0"
Mar 20 08:32:34 crc kubenswrapper[5136]: I0320 08:32:34.396759 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"
Mar 20 08:32:34 crc kubenswrapper[5136]: E0320 08:32:34.397032 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.110202 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6fcd7752-be4a-45af-b12d-f4ee6275b3b3","Type":"ContainerStarted","Data":"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53"}
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.110525 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.112706 5136 generic.go:334] "Generic (PLEG): container finished" podID="e4a91420-177b-479a-aeb6-0fdc31a375e7" containerID="5f4f3dcd0f729e1778bba488d0db6bc5470314dde01c438d740b106a4b7b2bc2" exitCode=0
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.112772 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" event={"ID":"e4a91420-177b-479a-aeb6-0fdc31a375e7","Type":"ContainerDied","Data":"5f4f3dcd0f729e1778bba488d0db6bc5470314dde01c438d740b106a4b7b2bc2"}
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.116147 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerStarted","Data":"249161d201c86c824c827618ff63c208e5d4f7836f10cac1b975be51340fe2bc"}
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.118129 5136 generic.go:334] "Generic (PLEG): container finished" podID="da7b3de9-906c-4470-9b45-498268d7161b" containerID="7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0" exitCode=0
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.118189 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" event={"ID":"da7b3de9-906c-4470-9b45-498268d7161b","Type":"ContainerDied","Data":"7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0"}
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.120239 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerStarted","Data":"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77"}
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.128479 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.222406253 podStartE2EDuration="26.128465123s" podCreationTimestamp="2026-03-20 08:32:09 +0000 UTC" firstStartedPulling="2026-03-20 08:32:10.211203214 +0000 UTC m=+6162.470514365" lastFinishedPulling="2026-03-20 08:32:34.117262054 +0000 UTC m=+6186.376573235" observedRunningTime="2026-03-20 08:32:35.12838611 +0000 UTC m=+6187.387697291" watchObservedRunningTime="2026-03-20 08:32:35.128465123 +0000 UTC m=+6187.387776274"
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.463029 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86ffc6867-88z5z"
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.467647 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb"
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.561880 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config\") pod \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") "
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.561946 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc\") pod \"e4a91420-177b-479a-aeb6-0fdc31a375e7\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") "
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.561969 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zm25t\" (UniqueName: \"kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t\") pod \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\" (UID: \"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0\") "
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.561986 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config\") pod \"e4a91420-177b-479a-aeb6-0fdc31a375e7\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") "
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.562065 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9zw7\" (UniqueName: \"kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7\") pod \"e4a91420-177b-479a-aeb6-0fdc31a375e7\" (UID: \"e4a91420-177b-479a-aeb6-0fdc31a375e7\") "
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.562637 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config" (OuterVolumeSpecName: "config") pod "2c42ef9c-e931-4771-9d5f-3fa6b2a851c0" (UID: "2c42ef9c-e931-4771-9d5f-3fa6b2a851c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.563156 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-config\") on node \"crc\" DevicePath \"\""
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.568143 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t" (OuterVolumeSpecName: "kube-api-access-zm25t") pod "2c42ef9c-e931-4771-9d5f-3fa6b2a851c0" (UID: "2c42ef9c-e931-4771-9d5f-3fa6b2a851c0"). InnerVolumeSpecName "kube-api-access-zm25t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.569140 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7" (OuterVolumeSpecName: "kube-api-access-l9zw7") pod "e4a91420-177b-479a-aeb6-0fdc31a375e7" (UID: "e4a91420-177b-479a-aeb6-0fdc31a375e7"). InnerVolumeSpecName "kube-api-access-l9zw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.580046 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e4a91420-177b-479a-aeb6-0fdc31a375e7" (UID: "e4a91420-177b-479a-aeb6-0fdc31a375e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.580538 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config" (OuterVolumeSpecName: "config") pod "e4a91420-177b-479a-aeb6-0fdc31a375e7" (UID: "e4a91420-177b-479a-aeb6-0fdc31a375e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.664675 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9zw7\" (UniqueName: \"kubernetes.io/projected/e4a91420-177b-479a-aeb6-0fdc31a375e7-kube-api-access-l9zw7\") on node \"crc\" DevicePath \"\""
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.665003 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.665017 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zm25t\" (UniqueName: \"kubernetes.io/projected/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0-kube-api-access-zm25t\") on node \"crc\" DevicePath \"\""
Mar 20 08:32:35 crc kubenswrapper[5136]: I0320 08:32:35.665029 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4a91420-177b-479a-aeb6-0fdc31a375e7-config\") on node \"crc\" DevicePath \"\""
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.128442 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerStarted","Data":"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f"}
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.129843 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86ffc6867-88z5z" event={"ID":"e4a91420-177b-479a-aeb6-0fdc31a375e7","Type":"ContainerDied","Data":"ed5c7fa75820fbac3dc8454615e81f7d5004c1cabef4298a4f0b3764020471ee"}
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.129881 5136 scope.go:117] "RemoveContainer" containerID="5f4f3dcd0f729e1778bba488d0db6bc5470314dde01c438d740b106a4b7b2bc2"
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.129885 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86ffc6867-88z5z"
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.131244 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerStarted","Data":"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e"}
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.133669 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" event={"ID":"da7b3de9-906c-4470-9b45-498268d7161b","Type":"ContainerStarted","Data":"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d"}
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.134152 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.135470 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb" event={"ID":"2c42ef9c-e931-4771-9d5f-3fa6b2a851c0","Type":"ContainerDied","Data":"4912bc2f2e88dde7c6725f660921ca75fb40083a65fdbf07840e1559e1bb6656"}
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.135593 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6648865bb9-n4kmb"
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.190086 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" podStartSLOduration=3.598541481 podStartE2EDuration="31.190059567s" podCreationTimestamp="2026-03-20 08:32:05 +0000 UTC" firstStartedPulling="2026-03-20 08:32:06.708989284 +0000 UTC m=+6158.968300435" lastFinishedPulling="2026-03-20 08:32:34.30050737 +0000 UTC m=+6186.559818521" observedRunningTime="2026-03-20 08:32:36.178901573 +0000 UTC m=+6188.438212734" watchObservedRunningTime="2026-03-20 08:32:36.190059567 +0000 UTC m=+6188.449370718"
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.251838 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"]
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.258709 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6648865bb9-n4kmb"]
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.292234 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"]
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.301749 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86ffc6867-88z5z"]
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.406570 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c42ef9c-e931-4771-9d5f-3fa6b2a851c0" path="/var/lib/kubelet/pods/2c42ef9c-e931-4771-9d5f-3fa6b2a851c0/volumes"
Mar 20 08:32:36 crc kubenswrapper[5136]: I0320 08:32:36.406943 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4a91420-177b-479a-aeb6-0fdc31a375e7" path="/var/lib/kubelet/pods/e4a91420-177b-479a-aeb6-0fdc31a375e7/volumes"
Mar 20 08:32:37 crc kubenswrapper[5136]: I0320 08:32:37.461877 5136 scope.go:117] "RemoveContainer" containerID="53356b00d0884cc08ef3105861c0ae9d4bfaf917f6a2b9dfbe1bccff6dec5b55"
Mar 20 08:32:39 crc kubenswrapper[5136]: I0320 08:32:39.168271 5136 generic.go:334] "Generic (PLEG): container finished" podID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerID="3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77" exitCode=0
Mar 20 08:32:39 crc kubenswrapper[5136]: I0320 08:32:39.168372 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerDied","Data":"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77"}
Mar 20 08:32:39 crc kubenswrapper[5136]: I0320 08:32:39.175744 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerID="249161d201c86c824c827618ff63c208e5d4f7836f10cac1b975be51340fe2bc" exitCode=0
Mar 20 08:32:39 crc kubenswrapper[5136]: I0320 08:32:39.175782 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerDied","Data":"249161d201c86c824c827618ff63c208e5d4f7836f10cac1b975be51340fe2bc"}
Mar 20 08:32:39 crc kubenswrapper[5136]: I0320 08:32:39.683190 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0"
Mar 20 08:32:40 crc kubenswrapper[5136]: I0320 08:32:40.189871 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerStarted","Data":"a2cd799ad38f20f3a20df188a90ca9d10f639dafb3f002a582a1fe8b8331c153"}
Mar 20 08:32:40 crc kubenswrapper[5136]: I0320 08:32:40.192214 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerStarted","Data":"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe"}
Mar 20 08:32:40 crc kubenswrapper[5136]: I0320 08:32:40.211571 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=8.838512441 podStartE2EDuration="33.211553273s" podCreationTimestamp="2026-03-20 08:32:07 +0000 UTC" firstStartedPulling="2026-03-20 08:32:09.90032936 +0000 UTC m=+6162.159640511" lastFinishedPulling="2026-03-20 08:32:34.273370192 +0000 UTC m=+6186.532681343" observedRunningTime="2026-03-20 08:32:40.206234409 +0000 UTC m=+6192.465545560" watchObservedRunningTime="2026-03-20 08:32:40.211553273 +0000 UTC m=+6192.470864424"
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.080031 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-685785d49f-r6vtp"
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.110158 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.308151123 podStartE2EDuration="35.110125557s" podCreationTimestamp="2026-03-20 08:32:06 +0000 UTC" firstStartedPulling="2026-03-20 08:32:08.444282651 +0000 UTC m=+6160.703593802" lastFinishedPulling="2026-03-20 08:32:34.246257085 +0000 UTC m=+6186.505568236" observedRunningTime="2026-03-20 08:32:40.236211955 +0000 UTC m=+6192.495523106" watchObservedRunningTime="2026-03-20 08:32:41.110125557 +0000 UTC m=+6193.369436718"
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.139751 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"]
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.409918 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.458286 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config\") pod \"3045f340-8dd6-4a70-8407-ca021577d30c\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") "
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.458521 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc\") pod \"3045f340-8dd6-4a70-8407-ca021577d30c\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") "
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.458668 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czkhl\" (UniqueName: \"kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl\") pod \"3045f340-8dd6-4a70-8407-ca021577d30c\" (UID: \"3045f340-8dd6-4a70-8407-ca021577d30c\") "
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.458842 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config" (OuterVolumeSpecName: "config") pod "3045f340-8dd6-4a70-8407-ca021577d30c" (UID: "3045f340-8dd6-4a70-8407-ca021577d30c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.458947 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3045f340-8dd6-4a70-8407-ca021577d30c" (UID: "3045f340-8dd6-4a70-8407-ca021577d30c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.460350 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.460388 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3045f340-8dd6-4a70-8407-ca021577d30c-config\") on node \"crc\" DevicePath \"\""
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.477139 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl" (OuterVolumeSpecName: "kube-api-access-czkhl") pod "3045f340-8dd6-4a70-8407-ca021577d30c" (UID: "3045f340-8dd6-4a70-8407-ca021577d30c"). InnerVolumeSpecName "kube-api-access-czkhl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:32:41 crc kubenswrapper[5136]: I0320 08:32:41.562907 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czkhl\" (UniqueName: \"kubernetes.io/projected/3045f340-8dd6-4a70-8407-ca021577d30c-kube-api-access-czkhl\") on node \"crc\" DevicePath \"\""
Mar 20 08:32:42 crc kubenswrapper[5136]: I0320 08:32:42.213211 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q" event={"ID":"3045f340-8dd6-4a70-8407-ca021577d30c","Type":"ContainerDied","Data":"4aa6b047cacc9f7c7b7b22f5ea520282914082a02383d0fe88b6a663fa015092"}
Mar 20 08:32:42 crc kubenswrapper[5136]: I0320 08:32:42.213624 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"
Mar 20 08:32:42 crc kubenswrapper[5136]: I0320 08:32:42.293236 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"]
Mar 20 08:32:42 crc kubenswrapper[5136]: I0320 08:32:42.304208 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f7c7bdb8c-sdq9q"]
Mar 20 08:32:42 crc kubenswrapper[5136]: I0320 08:32:42.404871 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3045f340-8dd6-4a70-8407-ca021577d30c" path="/var/lib/kubelet/pods/3045f340-8dd6-4a70-8407-ca021577d30c/volumes"
Mar 20 08:32:47 crc kubenswrapper[5136]: I0320 08:32:47.397117 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"
Mar 20 08:32:47 crc kubenswrapper[5136]: E0320 08:32:47.397901 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:32:47 crc kubenswrapper[5136]: I0320 08:32:47.966753 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Mar 20 08:32:47 crc kubenswrapper[5136]: I0320 08:32:47.967178 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Mar 20 08:32:48 crc kubenswrapper[5136]: I0320 08:32:48.203632 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Mar 20 08:32:48 crc kubenswrapper[5136]: I0320 08:32:48.337453 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Mar 20 08:32:49 crc kubenswrapper[5136]: I0320 08:32:49.347660 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Mar 20 08:32:49 crc kubenswrapper[5136]: I0320 08:32:49.347709 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Mar 20 08:32:49 crc kubenswrapper[5136]: I0320 08:32:49.444740 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Mar 20 08:32:50 crc kubenswrapper[5136]: I0320 08:32:50.371648 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.326325 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-94vhs"]
Mar 20 08:32:56 crc kubenswrapper[5136]: E0320 08:32:56.327042 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4a91420-177b-479a-aeb6-0fdc31a375e7" containerName="init"
Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.327063 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4a91420-177b-479a-aeb6-0fdc31a375e7" containerName="init"
Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.327347 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4a91420-177b-479a-aeb6-0fdc31a375e7" containerName="init"
Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.328081 5136 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.333149 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.338463 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-94vhs"] Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.409696 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts\") pod \"root-account-create-update-94vhs\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.409990 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7b2q\" (UniqueName: \"kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q\") pod \"root-account-create-update-94vhs\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.512140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7b2q\" (UniqueName: \"kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q\") pod \"root-account-create-update-94vhs\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.512249 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts\") pod \"root-account-create-update-94vhs\" (UID: 
\"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.513426 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts\") pod \"root-account-create-update-94vhs\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.541867 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7b2q\" (UniqueName: \"kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q\") pod \"root-account-create-update-94vhs\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.662043 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:56 crc kubenswrapper[5136]: I0320 08:32:56.939574 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-94vhs"] Mar 20 08:32:56 crc kubenswrapper[5136]: W0320 08:32:56.947005 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode70bfda2_008f_4f6f_87a1_a349df41af80.slice/crio-8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23 WatchSource:0}: Error finding container 8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23: Status 404 returned error can't find the container with id 8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23 Mar 20 08:32:57 crc kubenswrapper[5136]: I0320 08:32:57.346562 5136 generic.go:334] "Generic (PLEG): container finished" podID="e70bfda2-008f-4f6f-87a1-a349df41af80" containerID="513bfb357219a2477e16b62515cf153229315c47b91f7217842266ad20b33891" exitCode=0 Mar 20 08:32:57 crc kubenswrapper[5136]: I0320 08:32:57.346856 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-94vhs" event={"ID":"e70bfda2-008f-4f6f-87a1-a349df41af80","Type":"ContainerDied","Data":"513bfb357219a2477e16b62515cf153229315c47b91f7217842266ad20b33891"} Mar 20 08:32:57 crc kubenswrapper[5136]: I0320 08:32:57.346889 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-94vhs" event={"ID":"e70bfda2-008f-4f6f-87a1-a349df41af80","Type":"ContainerStarted","Data":"8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23"} Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.658242 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-94vhs" Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.745281 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7b2q\" (UniqueName: \"kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q\") pod \"e70bfda2-008f-4f6f-87a1-a349df41af80\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.745377 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts\") pod \"e70bfda2-008f-4f6f-87a1-a349df41af80\" (UID: \"e70bfda2-008f-4f6f-87a1-a349df41af80\") " Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.746162 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e70bfda2-008f-4f6f-87a1-a349df41af80" (UID: "e70bfda2-008f-4f6f-87a1-a349df41af80"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.750993 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q" (OuterVolumeSpecName: "kube-api-access-s7b2q") pod "e70bfda2-008f-4f6f-87a1-a349df41af80" (UID: "e70bfda2-008f-4f6f-87a1-a349df41af80"). InnerVolumeSpecName "kube-api-access-s7b2q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.847606 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7b2q\" (UniqueName: \"kubernetes.io/projected/e70bfda2-008f-4f6f-87a1-a349df41af80-kube-api-access-s7b2q\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:58 crc kubenswrapper[5136]: I0320 08:32:58.847656 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e70bfda2-008f-4f6f-87a1-a349df41af80-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:32:59 crc kubenswrapper[5136]: I0320 08:32:59.363683 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-94vhs" event={"ID":"e70bfda2-008f-4f6f-87a1-a349df41af80","Type":"ContainerDied","Data":"8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23"} Mar 20 08:32:59 crc kubenswrapper[5136]: I0320 08:32:59.364137 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f245849a454b4c3de125ed30349af198edff46174fbc27459e8243ef715bf23" Mar 20 08:32:59 crc kubenswrapper[5136]: I0320 08:32:59.363905 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-94vhs" Mar 20 08:33:01 crc kubenswrapper[5136]: I0320 08:33:01.398317 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:33:01 crc kubenswrapper[5136]: E0320 08:33:01.398738 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:33:02 crc kubenswrapper[5136]: I0320 08:33:02.902082 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-94vhs"] Mar 20 08:33:02 crc kubenswrapper[5136]: I0320 08:33:02.908123 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-94vhs"] Mar 20 08:33:04 crc kubenswrapper[5136]: I0320 08:33:04.416983 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e70bfda2-008f-4f6f-87a1-a349df41af80" path="/var/lib/kubelet/pods/e70bfda2-008f-4f6f-87a1-a349df41af80/volumes" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.437883 5136 generic.go:334] "Generic (PLEG): container finished" podID="49b925de-d698-4589-9f71-cf485dd617d2" containerID="ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e" exitCode=0 Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.438033 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerDied","Data":"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e"} Mar 20 08:33:07 crc kubenswrapper[5136]: E0320 08:33:07.634285 5136 cadvisor_stats_provider.go:516] "Partial 
failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod144d1953_0072_4346_9aa6_83afc44fdb3b.slice/crio-conmon-35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f.scope\": RecentStats: unable to find data in memory cache]" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.909534 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-sbbcl"] Mar 20 08:33:07 crc kubenswrapper[5136]: E0320 08:33:07.909910 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e70bfda2-008f-4f6f-87a1-a349df41af80" containerName="mariadb-account-create-update" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.909931 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e70bfda2-008f-4f6f-87a1-a349df41af80" containerName="mariadb-account-create-update" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.910115 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e70bfda2-008f-4f6f-87a1-a349df41af80" containerName="mariadb-account-create-update" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.910669 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.914058 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 20 08:33:07 crc kubenswrapper[5136]: I0320 08:33:07.919266 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sbbcl"] Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.018743 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts\") pod \"root-account-create-update-sbbcl\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.019201 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmnkn\" (UniqueName: \"kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn\") pod \"root-account-create-update-sbbcl\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.120661 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts\") pod \"root-account-create-update-sbbcl\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.120733 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmnkn\" (UniqueName: \"kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn\") pod \"root-account-create-update-sbbcl\" (UID: 
\"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.121924 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts\") pod \"root-account-create-update-sbbcl\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.142794 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmnkn\" (UniqueName: \"kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn\") pod \"root-account-create-update-sbbcl\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.225642 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.446193 5136 generic.go:334] "Generic (PLEG): container finished" podID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerID="35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f" exitCode=0 Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.446292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerDied","Data":"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f"} Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.451201 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerStarted","Data":"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682"} Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.452325 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.507632 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.705989842 podStartE2EDuration="1m3.507597248s" podCreationTimestamp="2026-03-20 08:32:05 +0000 UTC" firstStartedPulling="2026-03-20 08:32:07.456512825 +0000 UTC m=+6159.715823966" lastFinishedPulling="2026-03-20 08:32:34.258120221 +0000 UTC m=+6186.517431372" observedRunningTime="2026-03-20 08:33:08.506865875 +0000 UTC m=+6220.766177046" watchObservedRunningTime="2026-03-20 08:33:08.507597248 +0000 UTC m=+6220.766908409" Mar 20 08:33:08 crc kubenswrapper[5136]: I0320 08:33:08.633965 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-sbbcl"] Mar 20 08:33:08 crc kubenswrapper[5136]: W0320 08:33:08.636227 5136 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda25720ab_064e_40ce_ae93_03dd9c33cf66.slice/crio-ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4 WatchSource:0}: Error finding container ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4: Status 404 returned error can't find the container with id ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4 Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.463128 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerStarted","Data":"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef"} Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.463764 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.466748 5136 generic.go:334] "Generic (PLEG): container finished" podID="a25720ab-064e-40ce-ae93-03dd9c33cf66" containerID="77614674db5f14222adee033ce4bf5c60259ff8d124c2f5a8301de91d769caa0" exitCode=0 Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.467183 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbbcl" event={"ID":"a25720ab-064e-40ce-ae93-03dd9c33cf66","Type":"ContainerDied","Data":"77614674db5f14222adee033ce4bf5c60259ff8d124c2f5a8301de91d769caa0"} Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.467228 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbbcl" event={"ID":"a25720ab-064e-40ce-ae93-03dd9c33cf66","Type":"ContainerStarted","Data":"ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4"} Mar 20 08:33:09 crc kubenswrapper[5136]: I0320 08:33:09.501529 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=37.943516869 podStartE2EDuration="1m4.501509107s" podCreationTimestamp="2026-03-20 08:32:05 +0000 UTC" firstStartedPulling="2026-03-20 08:32:07.717042225 +0000 UTC m=+6159.976353376" lastFinishedPulling="2026-03-20 08:32:34.275034463 +0000 UTC m=+6186.534345614" observedRunningTime="2026-03-20 08:33:09.493330523 +0000 UTC m=+6221.752641714" watchObservedRunningTime="2026-03-20 08:33:09.501509107 +0000 UTC m=+6221.760820268" Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.736096 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.871945 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts\") pod \"a25720ab-064e-40ce-ae93-03dd9c33cf66\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.872024 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmnkn\" (UniqueName: \"kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn\") pod \"a25720ab-064e-40ce-ae93-03dd9c33cf66\" (UID: \"a25720ab-064e-40ce-ae93-03dd9c33cf66\") " Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.872596 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a25720ab-064e-40ce-ae93-03dd9c33cf66" (UID: "a25720ab-064e-40ce-ae93-03dd9c33cf66"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.879668 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn" (OuterVolumeSpecName: "kube-api-access-zmnkn") pod "a25720ab-064e-40ce-ae93-03dd9c33cf66" (UID: "a25720ab-064e-40ce-ae93-03dd9c33cf66"). InnerVolumeSpecName "kube-api-access-zmnkn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.974135 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a25720ab-064e-40ce-ae93-03dd9c33cf66-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:10 crc kubenswrapper[5136]: I0320 08:33:10.974181 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmnkn\" (UniqueName: \"kubernetes.io/projected/a25720ab-064e-40ce-ae93-03dd9c33cf66-kube-api-access-zmnkn\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:11 crc kubenswrapper[5136]: I0320 08:33:11.483463 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-sbbcl" event={"ID":"a25720ab-064e-40ce-ae93-03dd9c33cf66","Type":"ContainerDied","Data":"ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4"} Mar 20 08:33:11 crc kubenswrapper[5136]: I0320 08:33:11.483508 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-sbbcl" Mar 20 08:33:11 crc kubenswrapper[5136]: I0320 08:33:11.483508 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed0468d68dafe7c8658aef8d42fdf308c526844b023af76ef639c07d3a3f2ce4" Mar 20 08:33:13 crc kubenswrapper[5136]: I0320 08:33:13.396539 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:33:13 crc kubenswrapper[5136]: E0320 08:33:13.396797 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.345360 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-87czr"] Mar 20 08:33:26 crc kubenswrapper[5136]: E0320 08:33:26.347448 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a25720ab-064e-40ce-ae93-03dd9c33cf66" containerName="mariadb-account-create-update" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.347562 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a25720ab-064e-40ce-ae93-03dd9c33cf66" containerName="mariadb-account-create-update" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.347837 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a25720ab-064e-40ce-ae93-03dd9c33cf66" containerName="mariadb-account-create-update" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.349419 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.353296 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87czr"] Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.406251 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.406339 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.406383 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dddvp\" (UniqueName: \"kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.507788 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.507861 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dddvp\" (UniqueName: \"kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.507940 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.508419 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.508575 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.530991 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dddvp\" (UniqueName: \"kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp\") pod \"redhat-operators-87czr\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") " pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.537117 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:26 crc 
kubenswrapper[5136]: I0320 08:33:26.538670 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.571650 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.611204 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.611298 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.611379 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9ltt\" (UniqueName: \"kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.670455 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.712400 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.712671 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9ltt\" (UniqueName: \"kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.712736 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.713146 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.713351 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " 
pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.732700 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9ltt\" (UniqueName: \"kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt\") pod \"certified-operators-l9dsr\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.893505 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:26 crc kubenswrapper[5136]: I0320 08:33:26.979225 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.127067 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-87czr"] Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.214181 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.396540 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:33:27 crc kubenswrapper[5136]: E0320 08:33:27.396752 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.439645 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:27 crc kubenswrapper[5136]: W0320 08:33:27.470399 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d67c5ab_0096_4c4a_aaf1_f7b2e0ea2281.slice/crio-74206438e8b8abc064083d451300813bdaef59c88deed5a10e498dc1f1c9554c WatchSource:0}: Error finding container 74206438e8b8abc064083d451300813bdaef59c88deed5a10e498dc1f1c9554c: Status 404 returned error can't find the container with id 74206438e8b8abc064083d451300813bdaef59c88deed5a10e498dc1f1c9554c Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.598804 5136 generic.go:334] "Generic (PLEG): container finished" podID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerID="d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a" exitCode=0 Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.598902 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerDied","Data":"d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a"} Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.598947 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerStarted","Data":"56b0c4c299b46691e68d61c887ac0f6c17c1ba615dd505f5078a7f25fba0c4ba"} Mar 20 08:33:27 crc kubenswrapper[5136]: I0320 08:33:27.600126 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerStarted","Data":"74206438e8b8abc064083d451300813bdaef59c88deed5a10e498dc1f1c9554c"} Mar 20 08:33:28 crc kubenswrapper[5136]: I0320 08:33:28.608388 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87czr" 
event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerStarted","Data":"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175"} Mar 20 08:33:28 crc kubenswrapper[5136]: I0320 08:33:28.609461 5136 generic.go:334] "Generic (PLEG): container finished" podID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerID="2ec94eecde00968de015f346bb8ec607450091902b850fcfe7165fa270145bd4" exitCode=0 Mar 20 08:33:28 crc kubenswrapper[5136]: I0320 08:33:28.609496 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerDied","Data":"2ec94eecde00968de015f346bb8ec607450091902b850fcfe7165fa270145bd4"} Mar 20 08:33:29 crc kubenswrapper[5136]: I0320 08:33:29.617428 5136 generic.go:334] "Generic (PLEG): container finished" podID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerID="31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175" exitCode=0 Mar 20 08:33:29 crc kubenswrapper[5136]: I0320 08:33:29.617507 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerDied","Data":"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175"} Mar 20 08:33:29 crc kubenswrapper[5136]: I0320 08:33:29.622023 5136 generic.go:334] "Generic (PLEG): container finished" podID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerID="2efc9186cba61829377c5c9c206d8e8991b7bb27d75aebe001c623815beef975" exitCode=0 Mar 20 08:33:29 crc kubenswrapper[5136]: I0320 08:33:29.622066 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerDied","Data":"2efc9186cba61829377c5c9c206d8e8991b7bb27d75aebe001c623815beef975"} Mar 20 08:33:30 crc kubenswrapper[5136]: I0320 08:33:30.631438 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerStarted","Data":"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5"} Mar 20 08:33:30 crc kubenswrapper[5136]: I0320 08:33:30.634212 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerStarted","Data":"f3e6832d2cbe196dd11784002f038348557fa97e9a13d320c194d2a3db9f6999"} Mar 20 08:33:30 crc kubenswrapper[5136]: I0320 08:33:30.658983 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-87czr" podStartSLOduration=2.201307266 podStartE2EDuration="4.658965871s" podCreationTimestamp="2026-03-20 08:33:26 +0000 UTC" firstStartedPulling="2026-03-20 08:33:27.600310318 +0000 UTC m=+6239.859621469" lastFinishedPulling="2026-03-20 08:33:30.057968923 +0000 UTC m=+6242.317280074" observedRunningTime="2026-03-20 08:33:30.653934135 +0000 UTC m=+6242.913245286" watchObservedRunningTime="2026-03-20 08:33:30.658965871 +0000 UTC m=+6242.918277022" Mar 20 08:33:30 crc kubenswrapper[5136]: I0320 08:33:30.678037 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l9dsr" podStartSLOduration=3.285593538 podStartE2EDuration="4.678014682s" podCreationTimestamp="2026-03-20 08:33:26 +0000 UTC" firstStartedPulling="2026-03-20 08:33:28.611316057 +0000 UTC m=+6240.870627208" lastFinishedPulling="2026-03-20 08:33:30.003737201 +0000 UTC m=+6242.263048352" observedRunningTime="2026-03-20 08:33:30.671503279 +0000 UTC m=+6242.930814430" watchObservedRunningTime="2026-03-20 08:33:30.678014682 +0000 UTC m=+6242.937325843" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.860845 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:33:32 crc kubenswrapper[5136]: 
I0320 08:33:32.861993 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.881906 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.897526 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.897565 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.897668 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxv65\" (UniqueName: \"kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.998685 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.998745 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:32 crc kubenswrapper[5136]: I0320 08:33:32.998922 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxv65\" (UniqueName: \"kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.000031 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.000190 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.022212 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxv65\" (UniqueName: \"kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65\") pod \"dnsmasq-dns-5b5c84b9cc-cxrps\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.181500 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.588158 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:33:33 crc kubenswrapper[5136]: I0320 08:33:33.680978 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:33:34 crc kubenswrapper[5136]: I0320 08:33:34.469152 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:33:34 crc kubenswrapper[5136]: I0320 08:33:34.667221 5136 generic.go:334] "Generic (PLEG): container finished" podID="e61df6ca-2419-400a-8790-9695f75c6d92" containerID="02dd795cb150362efe906bc099f470a71a335d9458efc922a23eb6c04569901e" exitCode=0 Mar 20 08:33:34 crc kubenswrapper[5136]: I0320 08:33:34.667271 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" event={"ID":"e61df6ca-2419-400a-8790-9695f75c6d92","Type":"ContainerDied","Data":"02dd795cb150362efe906bc099f470a71a335d9458efc922a23eb6c04569901e"} Mar 20 08:33:34 crc kubenswrapper[5136]: I0320 08:33:34.667301 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" event={"ID":"e61df6ca-2419-400a-8790-9695f75c6d92","Type":"ContainerStarted","Data":"5f995642f784fe24cc982d1a64669bee0b35f9981d3b63db2aa6b8236cd2ea18"} Mar 20 08:33:35 crc kubenswrapper[5136]: I0320 08:33:35.676055 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" event={"ID":"e61df6ca-2419-400a-8790-9695f75c6d92","Type":"ContainerStarted","Data":"8c3b72a05088d18b83e8fd4c523a6250996da641765be00333d607a1dfe71673"} Mar 20 08:33:35 crc kubenswrapper[5136]: I0320 08:33:35.676481 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:35 crc kubenswrapper[5136]: I0320 08:33:35.700968 5136 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" podStartSLOduration=3.700950191 podStartE2EDuration="3.700950191s" podCreationTimestamp="2026-03-20 08:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:33:35.697201024 +0000 UTC m=+6247.956512175" watchObservedRunningTime="2026-03-20 08:33:35.700950191 +0000 UTC m=+6247.960261342" Mar 20 08:33:36 crc kubenswrapper[5136]: I0320 08:33:36.671089 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:36 crc kubenswrapper[5136]: I0320 08:33:36.671162 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-87czr" Mar 20 08:33:36 crc kubenswrapper[5136]: I0320 08:33:36.894236 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:36 crc kubenswrapper[5136]: I0320 08:33:36.894595 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:36 crc kubenswrapper[5136]: I0320 08:33:36.943005 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:37 crc kubenswrapper[5136]: I0320 08:33:37.729472 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:37 crc kubenswrapper[5136]: I0320 08:33:37.743547 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-87czr" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="registry-server" probeResult="failure" output=< Mar 20 08:33:37 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" 
within 1s Mar 20 08:33:37 crc kubenswrapper[5136]: > Mar 20 08:33:37 crc kubenswrapper[5136]: I0320 08:33:37.772679 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:37 crc kubenswrapper[5136]: I0320 08:33:37.815212 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="rabbitmq" containerID="cri-o://112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682" gracePeriod=604796 Mar 20 08:33:38 crc kubenswrapper[5136]: I0320 08:33:38.801851 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerName="rabbitmq" containerID="cri-o://63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef" gracePeriod=604796 Mar 20 08:33:39 crc kubenswrapper[5136]: I0320 08:33:39.396843 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:33:39 crc kubenswrapper[5136]: E0320 08:33:39.397229 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:33:39 crc kubenswrapper[5136]: I0320 08:33:39.704120 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l9dsr" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="registry-server" containerID="cri-o://f3e6832d2cbe196dd11784002f038348557fa97e9a13d320c194d2a3db9f6999" gracePeriod=2 Mar 20 08:33:40 crc 
kubenswrapper[5136]: I0320 08:33:40.713166 5136 generic.go:334] "Generic (PLEG): container finished" podID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerID="f3e6832d2cbe196dd11784002f038348557fa97e9a13d320c194d2a3db9f6999" exitCode=0 Mar 20 08:33:40 crc kubenswrapper[5136]: I0320 08:33:40.713257 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerDied","Data":"f3e6832d2cbe196dd11784002f038348557fa97e9a13d320c194d2a3db9f6999"} Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.248857 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.353803 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9ltt\" (UniqueName: \"kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt\") pod \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.353954 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities\") pod \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.353983 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content\") pod \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\" (UID: \"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281\") " Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.355397 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities" (OuterVolumeSpecName: "utilities") pod "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" (UID: "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.360308 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt" (OuterVolumeSpecName: "kube-api-access-x9ltt") pod "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" (UID: "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281"). InnerVolumeSpecName "kube-api-access-x9ltt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.409104 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" (UID: "5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.457355 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.457425 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.457453 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9ltt\" (UniqueName: \"kubernetes.io/projected/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281-kube-api-access-x9ltt\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.721386 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l9dsr" event={"ID":"5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281","Type":"ContainerDied","Data":"74206438e8b8abc064083d451300813bdaef59c88deed5a10e498dc1f1c9554c"} Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.721433 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l9dsr" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.721439 5136 scope.go:117] "RemoveContainer" containerID="f3e6832d2cbe196dd11784002f038348557fa97e9a13d320c194d2a3db9f6999" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.738726 5136 scope.go:117] "RemoveContainer" containerID="2efc9186cba61829377c5c9c206d8e8991b7bb27d75aebe001c623815beef975" Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.752124 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.767612 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l9dsr"] Mar 20 08:33:41 crc kubenswrapper[5136]: I0320 08:33:41.776728 5136 scope.go:117] "RemoveContainer" containerID="2ec94eecde00968de015f346bb8ec607450091902b850fcfe7165fa270145bd4" Mar 20 08:33:42 crc kubenswrapper[5136]: I0320 08:33:42.406621 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" path="/var/lib/kubelet/pods/5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281/volumes" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.183033 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.266055 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"] Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.266514 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="dnsmasq-dns" containerID="cri-o://4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d" gracePeriod=10 Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.689942 5136 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.697389 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config\") pod \"da7b3de9-906c-4470-9b45-498268d7161b\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.697432 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc\") pod \"da7b3de9-906c-4470-9b45-498268d7161b\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.697519 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6w6c\" (UniqueName: \"kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c\") pod \"da7b3de9-906c-4470-9b45-498268d7161b\" (UID: \"da7b3de9-906c-4470-9b45-498268d7161b\") " Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.702194 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c" (OuterVolumeSpecName: "kube-api-access-r6w6c") pod "da7b3de9-906c-4470-9b45-498268d7161b" (UID: "da7b3de9-906c-4470-9b45-498268d7161b"). InnerVolumeSpecName "kube-api-access-r6w6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.737456 5136 generic.go:334] "Generic (PLEG): container finished" podID="da7b3de9-906c-4470-9b45-498268d7161b" containerID="4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d" exitCode=0 Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.737794 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" event={"ID":"da7b3de9-906c-4470-9b45-498268d7161b","Type":"ContainerDied","Data":"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d"} Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.737849 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" event={"ID":"da7b3de9-906c-4470-9b45-498268d7161b","Type":"ContainerDied","Data":"b875bc351439176b0ec46aee9af86021a8df8774ef7b812887178dc8835e12d7"} Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.737938 5136 scope.go:117] "RemoveContainer" containerID="4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.738335 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-685785d49f-r6vtp" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.773444 5136 scope.go:117] "RemoveContainer" containerID="7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.774158 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config" (OuterVolumeSpecName: "config") pod "da7b3de9-906c-4470-9b45-498268d7161b" (UID: "da7b3de9-906c-4470-9b45-498268d7161b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.779153 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "da7b3de9-906c-4470-9b45-498268d7161b" (UID: "da7b3de9-906c-4470-9b45-498268d7161b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.790547 5136 scope.go:117] "RemoveContainer" containerID="4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d" Mar 20 08:33:43 crc kubenswrapper[5136]: E0320 08:33:43.790961 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d\": container with ID starting with 4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d not found: ID does not exist" containerID="4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.790992 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d"} err="failed to get container status \"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d\": rpc error: code = NotFound desc = could not find container \"4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d\": container with ID starting with 4ae3666cafb7fa634348658939e12b17295ac451d707af44c7a236580bd13a3d not found: ID does not exist" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.791013 5136 scope.go:117] "RemoveContainer" containerID="7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0" Mar 20 08:33:43 crc kubenswrapper[5136]: E0320 08:33:43.791207 5136 log.go:32] "ContainerStatus from 
runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0\": container with ID starting with 7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0 not found: ID does not exist" containerID="7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.791243 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0"} err="failed to get container status \"7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0\": rpc error: code = NotFound desc = could not find container \"7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0\": container with ID starting with 7f6641fccf5c97ba3e9c777aa43831b0b2400a29f6d3abc456305f538c1396c0 not found: ID does not exist" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.798720 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.798742 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6w6c\" (UniqueName: \"kubernetes.io/projected/da7b3de9-906c-4470-9b45-498268d7161b-kube-api-access-r6w6c\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:43 crc kubenswrapper[5136]: I0320 08:33:43.798753 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/da7b3de9-906c-4470-9b45-498268d7161b-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.072479 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"] Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.077611 5136 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-685785d49f-r6vtp"] Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.278543 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.406707 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da7b3de9-906c-4470-9b45-498268d7161b" path="/var/lib/kubelet/pods/da7b3de9-906c-4470-9b45-498268d7161b/volumes" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.409466 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5rbp\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.409864 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410021 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410057 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410090 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410121 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410170 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410346 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410931 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.410976 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie\") pod 
\"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.411004 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls\") pod \"49b925de-d698-4589-9f71-cf485dd617d2\" (UID: \"49b925de-d698-4589-9f71-cf485dd617d2\") " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.411327 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.411719 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.412424 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.414861 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). 
InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.414898 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info" (OuterVolumeSpecName: "pod-info") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.415060 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.418016 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp" (OuterVolumeSpecName: "kube-api-access-q5rbp") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "kube-api-access-q5rbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.423932 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.435144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0" (OuterVolumeSpecName: "persistence") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.436972 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data" (OuterVolumeSpecName: "config-data") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.457205 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf" (OuterVolumeSpecName: "server-conf") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.484030 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "49b925de-d698-4589-9f71-cf485dd617d2" (UID: "49b925de-d698-4589-9f71-cf485dd617d2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513187 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513223 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/49b925de-d698-4589-9f71-cf485dd617d2-pod-info\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513233 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/49b925de-d698-4589-9f71-cf485dd617d2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513242 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513281 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") on node \"crc\" " Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513294 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513302 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 20 
08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513311 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513320 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5rbp\" (UniqueName: \"kubernetes.io/projected/49b925de-d698-4589-9f71-cf485dd617d2-kube-api-access-q5rbp\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.513328 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/49b925de-d698-4589-9f71-cf485dd617d2-server-conf\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.528532 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.528688 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0") on node "crc" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.615147 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.745255 5136 generic.go:334] "Generic (PLEG): container finished" podID="49b925de-d698-4589-9f71-cf485dd617d2" containerID="112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682" exitCode=0 Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.745351 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.745859 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerDied","Data":"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682"} Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.745906 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"49b925de-d698-4589-9f71-cf485dd617d2","Type":"ContainerDied","Data":"e183efadd223541a56fac8eea2671471e01e5264c953e53547b41494176cef67"} Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.745921 5136 scope.go:117] "RemoveContainer" containerID="112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.767572 5136 scope.go:117] "RemoveContainer" containerID="ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.776974 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.782796 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.804649 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805094 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="extract-utilities" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805130 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="extract-utilities" Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805146 5136 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="extract-content" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805152 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="extract-content" Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805170 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="setup-container" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805175 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="setup-container" Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805187 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="init" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805212 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="init" Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805234 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="rabbitmq" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805239 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="rabbitmq" Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805250 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="dnsmasq-dns" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805256 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="dnsmasq-dns" Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.805266 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" 
containerName="registry-server" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805290 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="registry-server" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805451 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d67c5ab-0096-4c4a-aaf1-f7b2e0ea2281" containerName="registry-server" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805466 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="49b925de-d698-4589-9f71-cf485dd617d2" containerName="rabbitmq" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.805474 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="da7b3de9-906c-4470-9b45-498268d7161b" containerName="dnsmasq-dns" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.806444 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.811492 5136 scope.go:117] "RemoveContainer" containerID="112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682" Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.812200 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682\": container with ID starting with 112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682 not found: ID does not exist" containerID="112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.812303 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682"} err="failed to get container status \"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682\": rpc error: code = 
NotFound desc = could not find container \"112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682\": container with ID starting with 112b570f6ea3b0c9bd8b688662b38240a74ac9d49475a251881696032ace1682 not found: ID does not exist" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.812356 5136 scope.go:117] "RemoveContainer" containerID="ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e" Mar 20 08:33:44 crc kubenswrapper[5136]: E0320 08:33:44.812853 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e\": container with ID starting with ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e not found: ID does not exist" containerID="ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.812884 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e"} err="failed to get container status \"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e\": rpc error: code = NotFound desc = could not find container \"ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e\": container with ID starting with ce615ae1b073a852fb2f8618428d9305c2f1f4a5211b5abec9f3978f7dd4129e not found: ID does not exist" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.817764 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818246 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818413 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Mar 20 08:33:44 crc kubenswrapper[5136]: 
I0320 08:33:44.818464 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818639 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818671 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818643 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hxrcr" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.818801 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.918850 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.918900 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfqqz\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.918951 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " 
pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.918974 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919013 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919064 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919079 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919092 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 
08:33:44.919155 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919177 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:44 crc kubenswrapper[5136]: I0320 08:33:44.919194 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021011 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021088 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021113 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021137 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021210 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021238 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021871 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021934 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: 
\"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.021975 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfqqz\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.022028 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.022079 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.022487 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.022481 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: 
I0320 08:33:45.022626 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.023296 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.024475 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.025297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.025428 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.026482 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls\") pod 
\"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.027128 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.027195 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ffa5bed1f53993894ec26cbc3fe1cf1f67f60a4766508e053a6d4d74251ebc8b/globalmount\"" pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.035559 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.042752 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfqqz\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.057324 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"rabbitmq-server-0\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " 
pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.167592 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.281473 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.426979 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427334 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427377 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427449 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427513 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427537 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427604 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22ffp\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427835 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427881 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " Mar 20 
08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.427905 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf\") pod \"144d1953-0072-4346-9aa6-83afc44fdb3b\" (UID: \"144d1953-0072-4346-9aa6-83afc44fdb3b\") " Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.429689 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.429939 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.430053 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.432086 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.434926 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.436862 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info" (OuterVolumeSpecName: "pod-info") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.438840 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp" (OuterVolumeSpecName: "kube-api-access-22ffp") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "kube-api-access-22ffp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.448754 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e" (OuterVolumeSpecName: "persistence") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.457790 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data" (OuterVolumeSpecName: "config-data") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.475311 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf" (OuterVolumeSpecName: "server-conf") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530026 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22ffp\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-kube-api-access-22ffp\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530070 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") on node \"crc\" " Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530084 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/144d1953-0072-4346-9aa6-83afc44fdb3b-pod-info\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530095 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-server-conf\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530105 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530114 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530125 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/144d1953-0072-4346-9aa6-83afc44fdb3b-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc 
kubenswrapper[5136]: I0320 08:33:45.530135 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530143 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/144d1953-0072-4346-9aa6-83afc44fdb3b-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.530151 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.545213 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "144d1953-0072-4346-9aa6-83afc44fdb3b" (UID: "144d1953-0072-4346-9aa6-83afc44fdb3b"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.545307 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.545452 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e") on node "crc" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.631469 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/144d1953-0072-4346-9aa6-83afc44fdb3b-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.631506 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") on node \"crc\" DevicePath \"\"" Mar 20 08:33:45 crc kubenswrapper[5136]: W0320 08:33:45.656200 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804d1bff_7c63_45a1_bf1a_68f3eedb6ac7.slice/crio-2628ea0efb2aa853724bc88efed7cc193022127cf5eeb6dafde84a174a83f933 WatchSource:0}: Error finding container 2628ea0efb2aa853724bc88efed7cc193022127cf5eeb6dafde84a174a83f933: Status 404 returned error can't find the container with id 2628ea0efb2aa853724bc88efed7cc193022127cf5eeb6dafde84a174a83f933 Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.658625 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.757360 5136 generic.go:334] "Generic (PLEG): container finished" podID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerID="63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef" exitCode=0 Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.757416 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.757476 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerDied","Data":"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef"} Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.757551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"144d1953-0072-4346-9aa6-83afc44fdb3b","Type":"ContainerDied","Data":"7392b8c85d117a71e5c4a2c47ce52f8f48e947a5928969391300b523dfc80f5d"} Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.757581 5136 scope.go:117] "RemoveContainer" containerID="63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.764045 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerStarted","Data":"2628ea0efb2aa853724bc88efed7cc193022127cf5eeb6dafde84a174a83f933"} Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.793284 5136 scope.go:117] "RemoveContainer" containerID="35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.794214 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.800650 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.817903 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:33:45 crc kubenswrapper[5136]: E0320 08:33:45.818194 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" 
containerName="rabbitmq" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.818209 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerName="rabbitmq" Mar 20 08:33:45 crc kubenswrapper[5136]: E0320 08:33:45.818240 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerName="setup-container" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.818248 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerName="setup-container" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.818374 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" containerName="rabbitmq" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.819026 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.820858 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.821002 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.821029 5136 scope.go:117] "RemoveContainer" containerID="63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.821244 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rxs59" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.821296 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Mar 20 08:33:45 crc kubenswrapper[5136]: E0320 08:33:45.822005 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef\": container with ID starting with 63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef not found: ID does not exist" containerID="63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.822058 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef"} err="failed to get container status \"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef\": rpc error: code = NotFound desc = could not find container \"63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef\": container with ID starting with 63021d009c2e3f21f5efaffbf79f616d34212441ac2e48211dcf9feccecac0ef not found: ID does not exist" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.822080 5136 scope.go:117] "RemoveContainer" containerID="35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f" Mar 20 08:33:45 crc kubenswrapper[5136]: E0320 08:33:45.822427 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f\": container with ID starting with 35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f not found: ID does not exist" containerID="35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.822449 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f"} err="failed to get container status \"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f\": rpc error: code = NotFound desc = could not find container 
\"35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f\": container with ID starting with 35c4b0791e9a204a171a31520811fac746150d8bfc15bd254e254723260ddd2f not found: ID does not exist" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.825304 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.825417 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.825578 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.869932 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.934666 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.934715 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.934762 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.934977 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935323 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935543 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zckzm\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935713 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935883 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.935988 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:45 crc kubenswrapper[5136]: I0320 08:33:45.936121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038002 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038075 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038116 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038157 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038211 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zckzm\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038254 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0" Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " 
pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038337 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038392 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038446 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038486 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.038980 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.039099 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.039544 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.039893 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.040304 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.040918 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.041183 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d0b8d36279754dae866b74592d574f198fefe86644de71828fcab427244d57e1/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.043926 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.044096 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.044162 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.048296 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.055630 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zckzm\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.072577 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"rabbitmq-cell1-server-0\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.150427 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.406181 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="144d1953-0072-4346-9aa6-83afc44fdb3b" path="/var/lib/kubelet/pods/144d1953-0072-4346-9aa6-83afc44fdb3b/volumes"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.407105 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49b925de-d698-4589-9f71-cf485dd617d2" path="/var/lib/kubelet/pods/49b925de-d698-4589-9f71-cf485dd617d2/volumes"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.492288 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.718389 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-87czr"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.769590 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-87czr"
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.794985 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerStarted","Data":"25cfdad0d21b0236b303f371c98360bf9fc45a61724844374d71ad0ec3fcc738"}
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.799993 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerStarted","Data":"9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d"}
Mar 20 08:33:46 crc kubenswrapper[5136]: I0320 08:33:46.950512 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87czr"]
Mar 20 08:33:47 crc kubenswrapper[5136]: I0320 08:33:47.806078 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-87czr" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="registry-server" containerID="cri-o://c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5" gracePeriod=2
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.280483 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87czr"
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.374229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dddvp\" (UniqueName: \"kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp\") pod \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") "
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.374394 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities\") pod \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") "
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.374426 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content\") pod \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\" (UID: \"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1\") "
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.375304 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities" (OuterVolumeSpecName: "utilities") pod "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" (UID: "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.382781 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp" (OuterVolumeSpecName: "kube-api-access-dddvp") pod "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" (UID: "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1"). InnerVolumeSpecName "kube-api-access-dddvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.476592 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.476620 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dddvp\" (UniqueName: \"kubernetes.io/projected/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-kube-api-access-dddvp\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.525244 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" (UID: "c0517cd7-ad7f-4547-9d3e-df34e8cf61f1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.577783 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.815580 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerStarted","Data":"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9"}
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.818579 5136 generic.go:334] "Generic (PLEG): container finished" podID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerID="c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5" exitCode=0
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.818640 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerDied","Data":"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5"}
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.818661 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-87czr"
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.818680 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-87czr" event={"ID":"c0517cd7-ad7f-4547-9d3e-df34e8cf61f1","Type":"ContainerDied","Data":"56b0c4c299b46691e68d61c887ac0f6c17c1ba615dd505f5078a7f25fba0c4ba"}
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.818702 5136 scope.go:117] "RemoveContainer" containerID="c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5"
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.843202 5136 scope.go:117] "RemoveContainer" containerID="31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175"
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.889011 5136 scope.go:117] "RemoveContainer" containerID="d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a"
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.889239 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-87czr"]
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.895117 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-87czr"]
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.907592 5136 scope.go:117] "RemoveContainer" containerID="c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5"
Mar 20 08:33:48 crc kubenswrapper[5136]: E0320 08:33:48.908031 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5\": container with ID starting with c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5 not found: ID does not exist" containerID="c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5"
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.908066 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5"} err="failed to get container status \"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5\": rpc error: code = NotFound desc = could not find container \"c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5\": container with ID starting with c7808028bff61733663b4a106c6026f91b687eff928a7de2f1aa7454f92490c5 not found: ID does not exist"
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.908092 5136 scope.go:117] "RemoveContainer" containerID="31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175"
Mar 20 08:33:48 crc kubenswrapper[5136]: E0320 08:33:48.908382 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175\": container with ID starting with 31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175 not found: ID does not exist" containerID="31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175"
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.908411 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175"} err="failed to get container status \"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175\": rpc error: code = NotFound desc = could not find container \"31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175\": container with ID starting with 31bac2054e2ae1beb6d4ff79d77175c41e16c094e8908c107001af184ff07175 not found: ID does not exist"
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.908430 5136 scope.go:117] "RemoveContainer" containerID="d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a"
Mar 20 08:33:48 crc kubenswrapper[5136]: E0320 08:33:48.908785 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a\": container with ID starting with d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a not found: ID does not exist" containerID="d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a"
Mar 20 08:33:48 crc kubenswrapper[5136]: I0320 08:33:48.908811 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a"} err="failed to get container status \"d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a\": rpc error: code = NotFound desc = could not find container \"d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a\": container with ID starting with d8fb00431917424824544a4397bdff29554efbd29f162120fbbe466b8949016a not found: ID does not exist"
Mar 20 08:33:50 crc kubenswrapper[5136]: I0320 08:33:50.409937 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" path="/var/lib/kubelet/pods/c0517cd7-ad7f-4547-9d3e-df34e8cf61f1/volumes"
Mar 20 08:33:51 crc kubenswrapper[5136]: I0320 08:33:51.397477 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"
Mar 20 08:33:51 crc kubenswrapper[5136]: E0320 08:33:51.397915 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.185405 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566594-shcj9"]
Mar 20 08:34:00 crc kubenswrapper[5136]: E0320 08:34:00.186490 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="extract-content"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.186519 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="extract-content"
Mar 20 08:34:00 crc kubenswrapper[5136]: E0320 08:34:00.186588 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="registry-server"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.186601 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="registry-server"
Mar 20 08:34:00 crc kubenswrapper[5136]: E0320 08:34:00.186616 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="extract-utilities"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.186626 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="extract-utilities"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.187210 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0517cd7-ad7f-4547-9d3e-df34e8cf61f1" containerName="registry-server"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.188393 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566594-shcj9"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.199767 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.200366 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.201251 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.202348 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566594-shcj9"]
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.290142 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znhww\" (UniqueName: \"kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww\") pod \"auto-csr-approver-29566594-shcj9\" (UID: \"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e\") " pod="openshift-infra/auto-csr-approver-29566594-shcj9"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.393135 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znhww\" (UniqueName: \"kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww\") pod \"auto-csr-approver-29566594-shcj9\" (UID: \"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e\") " pod="openshift-infra/auto-csr-approver-29566594-shcj9"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.415903 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znhww\" (UniqueName: \"kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww\") pod \"auto-csr-approver-29566594-shcj9\" (UID: \"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e\") " pod="openshift-infra/auto-csr-approver-29566594-shcj9"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.531575 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566594-shcj9"
Mar 20 08:34:00 crc kubenswrapper[5136]: I0320 08:34:00.995410 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566594-shcj9"]
Mar 20 08:34:01 crc kubenswrapper[5136]: W0320 08:34:01.004510 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod948b6ddf_f1f2_46ef_9d9f_1e07c71f593e.slice/crio-5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c WatchSource:0}: Error finding container 5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c: Status 404 returned error can't find the container with id 5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c
Mar 20 08:34:01 crc kubenswrapper[5136]: I0320 08:34:01.937333 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566594-shcj9" event={"ID":"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e","Type":"ContainerStarted","Data":"5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c"}
Mar 20 08:34:02 crc kubenswrapper[5136]: I0320 08:34:02.946033 5136 generic.go:334] "Generic (PLEG): container finished" podID="948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" containerID="46b6ff50442d3c65cf954ac428d83a34b6951bd632d4aff5243b9fdb9f413511" exitCode=0
Mar 20 08:34:02 crc kubenswrapper[5136]: I0320 08:34:02.946247 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566594-shcj9" event={"ID":"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e","Type":"ContainerDied","Data":"46b6ff50442d3c65cf954ac428d83a34b6951bd632d4aff5243b9fdb9f413511"}
Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.240654 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566594-shcj9"
Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.347170 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znhww\" (UniqueName: \"kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww\") pod \"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e\" (UID: \"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e\") "
Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.352850 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww" (OuterVolumeSpecName: "kube-api-access-znhww") pod "948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" (UID: "948b6ddf-f1f2-46ef-9d9f-1e07c71f593e"). InnerVolumeSpecName "kube-api-access-znhww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.397192 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"
Mar 20 08:34:04 crc kubenswrapper[5136]: E0320 08:34:04.397408 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.449163 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znhww\" (UniqueName: \"kubernetes.io/projected/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e-kube-api-access-znhww\") on node \"crc\" DevicePath \"\""
Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.960799 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566594-shcj9" event={"ID":"948b6ddf-f1f2-46ef-9d9f-1e07c71f593e","Type":"ContainerDied","Data":"5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c"}
Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.960869 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566594-shcj9"
Mar 20 08:34:04 crc kubenswrapper[5136]: I0320 08:34:04.960876 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b14e624362946912d41fa2233e52cb4f1df14e8ac842acd495bed493956f50c"
Mar 20 08:34:05 crc kubenswrapper[5136]: I0320 08:34:05.323977 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566588-jjbzp"]
Mar 20 08:34:05 crc kubenswrapper[5136]: I0320 08:34:05.328550 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566588-jjbzp"]
Mar 20 08:34:06 crc kubenswrapper[5136]: I0320 08:34:06.405309 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca2b685a-cbe9-4989-87d2-09c8c1b3a846" path="/var/lib/kubelet/pods/ca2b685a-cbe9-4989-87d2-09c8c1b3a846/volumes"
Mar 20 08:34:19 crc kubenswrapper[5136]: E0320 08:34:19.009412 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804d1bff_7c63_45a1_bf1a_68f3eedb6ac7.slice/crio-9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod804d1bff_7c63_45a1_bf1a_68f3eedb6ac7.slice/crio-conmon-9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d.scope\": RecentStats: unable to find data in memory cache]"
Mar 20 08:34:19 crc kubenswrapper[5136]: I0320 08:34:19.082397 5136 generic.go:334] "Generic (PLEG): container finished" podID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerID="9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d" exitCode=0
Mar 20 08:34:19 crc kubenswrapper[5136]: I0320 08:34:19.082511 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerDied","Data":"9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d"}
Mar 20 08:34:19 crc kubenswrapper[5136]: I0320 08:34:19.398911 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"
Mar 20 08:34:19 crc kubenswrapper[5136]: E0320 08:34:19.399259 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:34:20 crc kubenswrapper[5136]: I0320 08:34:20.091476 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerStarted","Data":"55989c472a0077640a315a8d0db45eb57abfdba8bbd4415c0bb59c9d232cd911"}
Mar 20 08:34:20 crc kubenswrapper[5136]: I0320 08:34:20.092017 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Mar 20 08:34:20 crc kubenswrapper[5136]: I0320 08:34:20.120500 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.120473576 podStartE2EDuration="36.120473576s" podCreationTimestamp="2026-03-20 08:33:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:34:20.111700094 +0000 UTC m=+6292.371011275" watchObservedRunningTime="2026-03-20 08:34:20.120473576 +0000 UTC m=+6292.379784767"
Mar 20 08:34:21 crc kubenswrapper[5136]: I0320 08:34:21.102228 5136 generic.go:334] "Generic (PLEG): container finished" podID="e2c9ab46-3143-4472-a606-cd75def78f41" containerID="ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9" exitCode=0
Mar 20 08:34:21 crc kubenswrapper[5136]: I0320 08:34:21.102284 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerDied","Data":"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9"}
Mar 20 08:34:22 crc kubenswrapper[5136]: I0320 08:34:22.113302 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerStarted","Data":"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80"}
Mar 20 08:34:22 crc kubenswrapper[5136]: I0320 08:34:22.113749 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:34:22 crc kubenswrapper[5136]: I0320 08:34:22.143768 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.143739294 podStartE2EDuration="37.143739294s" podCreationTimestamp="2026-03-20 08:33:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:34:22.140498133 +0000 UTC m=+6294.399809294" watchObservedRunningTime="2026-03-20 08:34:22.143739294 +0000 UTC m=+6294.403050485"
Mar 20 08:34:34 crc kubenswrapper[5136]: I0320 08:34:34.397270 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"
Mar 20 08:34:34 crc kubenswrapper[5136]: E0320 08:34:34.398180 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:34:35 crc kubenswrapper[5136]: I0320 08:34:35.171069 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Mar 20 08:34:36 crc kubenswrapper[5136]: I0320 08:34:36.153015 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Mar 20 08:34:37 crc kubenswrapper[5136]: I0320 08:34:37.618920 5136 scope.go:117] "RemoveContainer" containerID="0bdf2244928c50e418739f666f637d0c122d85d20e0278df3b68b937bca89d79"
Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.689480 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"]
Mar 20 08:34:39 crc kubenswrapper[5136]: E0320 08:34:39.690579 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" containerName="oc"
Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.690597 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" containerName="oc"
Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.690799 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" containerName="oc"
Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.691454 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.696354 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-k465q"
Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.746944 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.778910 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvb59\" (UniqueName: \"kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59\") pod \"mariadb-client\" (UID: \"0be956a9-9d09-4611-9ac2-47c5f7e43adb\") " pod="openstack/mariadb-client"
Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.880521 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvb59\" (UniqueName: \"kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59\") pod \"mariadb-client\" (UID: \"0be956a9-9d09-4611-9ac2-47c5f7e43adb\") " pod="openstack/mariadb-client"
Mar 20 08:34:39 crc kubenswrapper[5136]: I0320 08:34:39.898152 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvb59\" (UniqueName: \"kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59\") pod \"mariadb-client\" (UID: \"0be956a9-9d09-4611-9ac2-47c5f7e43adb\") " pod="openstack/mariadb-client"
Mar 20 08:34:40 crc kubenswrapper[5136]: I0320 08:34:40.063031 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client"
Mar 20 08:34:40 crc kubenswrapper[5136]: I0320 08:34:40.578481 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"]
Mar 20 08:34:41 crc kubenswrapper[5136]: I0320 08:34:41.266605 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0be956a9-9d09-4611-9ac2-47c5f7e43adb","Type":"ContainerStarted","Data":"e1e18e45f51189c0abfd018ef596ef0adb8f22c53a722e833c202dc76c200ff9"}
Mar 20 08:34:45 crc kubenswrapper[5136]: I0320 08:34:45.318805 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0be956a9-9d09-4611-9ac2-47c5f7e43adb","Type":"ContainerStarted","Data":"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109"}
Mar 20 08:34:45 crc kubenswrapper[5136]: I0320 08:34:45.338178 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-client" podStartSLOduration=2.405964972 podStartE2EDuration="6.338157059s" podCreationTimestamp="2026-03-20 08:34:39 +0000 UTC" firstStartedPulling="2026-03-20 08:34:40.588279311 +0000 UTC m=+6312.847590462" lastFinishedPulling="2026-03-20 08:34:44.520471378 +0000 UTC m=+6316.779782549" observedRunningTime="2026-03-20 08:34:45.334235057 +0000 UTC m=+6317.593546248" watchObservedRunningTime="2026-03-20 08:34:45.338157059 +0000 UTC m=+6317.597468220"
Mar 20 08:34:47 crc kubenswrapper[5136]: I0320 08:34:47.397281 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5"
Mar 20 08:34:47 crc kubenswrapper[5136]: E0320 08:34:47.397839 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\""
pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:34:58 crc kubenswrapper[5136]: I0320 08:34:58.402730 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:34:58 crc kubenswrapper[5136]: E0320 08:34:58.404399 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:35:00 crc kubenswrapper[5136]: I0320 08:35:00.337150 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:35:00 crc kubenswrapper[5136]: I0320 08:35:00.338294 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-client" podUID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" containerName="mariadb-client" containerID="cri-o://ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109" gracePeriod=30 Mar 20 08:35:00 crc kubenswrapper[5136]: I0320 08:35:00.849807 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.017213 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvb59\" (UniqueName: \"kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59\") pod \"0be956a9-9d09-4611-9ac2-47c5f7e43adb\" (UID: \"0be956a9-9d09-4611-9ac2-47c5f7e43adb\") " Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.024087 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59" (OuterVolumeSpecName: "kube-api-access-bvb59") pod "0be956a9-9d09-4611-9ac2-47c5f7e43adb" (UID: "0be956a9-9d09-4611-9ac2-47c5f7e43adb"). InnerVolumeSpecName "kube-api-access-bvb59". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.119601 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvb59\" (UniqueName: \"kubernetes.io/projected/0be956a9-9d09-4611-9ac2-47c5f7e43adb-kube-api-access-bvb59\") on node \"crc\" DevicePath \"\"" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.444693 5136 generic.go:334] "Generic (PLEG): container finished" podID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" containerID="ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109" exitCode=143 Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.444741 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"0be956a9-9d09-4611-9ac2-47c5f7e43adb","Type":"ContainerDied","Data":"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109"} Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.444798 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"0be956a9-9d09-4611-9ac2-47c5f7e43adb","Type":"ContainerDied","Data":"e1e18e45f51189c0abfd018ef596ef0adb8f22c53a722e833c202dc76c200ff9"} Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.444752 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.444839 5136 scope.go:117] "RemoveContainer" containerID="ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.496002 5136 scope.go:117] "RemoveContainer" containerID="ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.496046 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:35:01 crc kubenswrapper[5136]: E0320 08:35:01.496547 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109\": container with ID starting with ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109 not found: ID does not exist" containerID="ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.496623 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109"} err="failed to get container status \"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109\": rpc error: code = NotFound desc = could not find container \"ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109\": container with ID starting with ffd0e84b1d9104eb6cd435f607386cbd9644b59a31416f3a0d6b09f458561109 not found: ID does not exist" Mar 20 08:35:01 crc kubenswrapper[5136]: I0320 08:35:01.501428 5136 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/mariadb-client"] Mar 20 08:35:02 crc kubenswrapper[5136]: I0320 08:35:02.408212 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" path="/var/lib/kubelet/pods/0be956a9-9d09-4611-9ac2-47c5f7e43adb/volumes" Mar 20 08:35:12 crc kubenswrapper[5136]: I0320 08:35:12.397660 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:35:12 crc kubenswrapper[5136]: E0320 08:35:12.398473 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:35:23 crc kubenswrapper[5136]: I0320 08:35:23.396701 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:35:23 crc kubenswrapper[5136]: E0320 08:35:23.397195 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:35:37 crc kubenswrapper[5136]: I0320 08:35:37.397255 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:35:37 crc kubenswrapper[5136]: E0320 08:35:37.398434 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:35:50 crc kubenswrapper[5136]: I0320 08:35:50.396675 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:35:50 crc kubenswrapper[5136]: E0320 08:35:50.397613 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.141583 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566596-npcd2"] Mar 20 08:36:00 crc kubenswrapper[5136]: E0320 08:36:00.142497 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" containerName="mariadb-client" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.142514 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" containerName="mariadb-client" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.142700 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be956a9-9d09-4611-9ac2-47c5f7e43adb" containerName="mariadb-client" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.143362 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.146079 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.146419 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.146631 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.153082 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566596-npcd2"] Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.299400 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hsx8\" (UniqueName: \"kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8\") pod \"auto-csr-approver-29566596-npcd2\" (UID: \"57673048-5103-4b04-8ef3-777cb1a33601\") " pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.400967 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hsx8\" (UniqueName: \"kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8\") pod \"auto-csr-approver-29566596-npcd2\" (UID: \"57673048-5103-4b04-8ef3-777cb1a33601\") " pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.425622 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.427286 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:00 crc 
kubenswrapper[5136]: I0320 08:36:00.427399 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.432559 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hsx8\" (UniqueName: \"kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8\") pod \"auto-csr-approver-29566596-npcd2\" (UID: \"57673048-5103-4b04-8ef3-777cb1a33601\") " pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.465031 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.502092 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.502133 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqz79\" (UniqueName: \"kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.502255 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " 
pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.603748 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.603803 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.603835 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqz79\" (UniqueName: \"kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.604744 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.604965 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" 
Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.622689 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqz79\" (UniqueName: \"kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79\") pod \"redhat-marketplace-7mclm\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.779734 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.880118 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566596-npcd2"] Mar 20 08:36:00 crc kubenswrapper[5136]: I0320 08:36:00.920636 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566596-npcd2" event={"ID":"57673048-5103-4b04-8ef3-777cb1a33601","Type":"ContainerStarted","Data":"25f19349296af7f41f743deb68f99919c31d1985b70314e7f931b4a5e5efad4c"} Mar 20 08:36:01 crc kubenswrapper[5136]: I0320 08:36:01.230511 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:01 crc kubenswrapper[5136]: W0320 08:36:01.232795 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod629a83e8_57da_42f6_b4f5_b7389a04f960.slice/crio-44696757fc71a063d2a7087406b0d5d06b985688fd1ca0f48622a56669ea41e7 WatchSource:0}: Error finding container 44696757fc71a063d2a7087406b0d5d06b985688fd1ca0f48622a56669ea41e7: Status 404 returned error can't find the container with id 44696757fc71a063d2a7087406b0d5d06b985688fd1ca0f48622a56669ea41e7 Mar 20 08:36:01 crc kubenswrapper[5136]: I0320 08:36:01.928876 5136 generic.go:334] "Generic (PLEG): container finished" podID="629a83e8-57da-42f6-b4f5-b7389a04f960" 
containerID="c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8" exitCode=0 Mar 20 08:36:01 crc kubenswrapper[5136]: I0320 08:36:01.928928 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerDied","Data":"c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8"} Mar 20 08:36:01 crc kubenswrapper[5136]: I0320 08:36:01.929124 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerStarted","Data":"44696757fc71a063d2a7087406b0d5d06b985688fd1ca0f48622a56669ea41e7"} Mar 20 08:36:02 crc kubenswrapper[5136]: I0320 08:36:02.937536 5136 generic.go:334] "Generic (PLEG): container finished" podID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerID="b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d" exitCode=0 Mar 20 08:36:02 crc kubenswrapper[5136]: I0320 08:36:02.937590 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerDied","Data":"b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d"} Mar 20 08:36:02 crc kubenswrapper[5136]: I0320 08:36:02.939370 5136 generic.go:334] "Generic (PLEG): container finished" podID="57673048-5103-4b04-8ef3-777cb1a33601" containerID="ef67e23c79ff3a82593be1acaca432453adbd354cb50446c81640884957e8ffe" exitCode=0 Mar 20 08:36:02 crc kubenswrapper[5136]: I0320 08:36:02.939411 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566596-npcd2" event={"ID":"57673048-5103-4b04-8ef3-777cb1a33601","Type":"ContainerDied","Data":"ef67e23c79ff3a82593be1acaca432453adbd354cb50446c81640884957e8ffe"} Mar 20 08:36:03 crc kubenswrapper[5136]: I0320 08:36:03.947337 5136 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerStarted","Data":"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc"} Mar 20 08:36:03 crc kubenswrapper[5136]: I0320 08:36:03.966070 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7mclm" podStartSLOduration=2.304934716 podStartE2EDuration="3.966052407s" podCreationTimestamp="2026-03-20 08:36:00 +0000 UTC" firstStartedPulling="2026-03-20 08:36:01.935974368 +0000 UTC m=+6394.195285519" lastFinishedPulling="2026-03-20 08:36:03.597092059 +0000 UTC m=+6395.856403210" observedRunningTime="2026-03-20 08:36:03.964950262 +0000 UTC m=+6396.224261423" watchObservedRunningTime="2026-03-20 08:36:03.966052407 +0000 UTC m=+6396.225363558" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.250996 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.357047 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hsx8\" (UniqueName: \"kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8\") pod \"57673048-5103-4b04-8ef3-777cb1a33601\" (UID: \"57673048-5103-4b04-8ef3-777cb1a33601\") " Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.363131 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8" (OuterVolumeSpecName: "kube-api-access-5hsx8") pod "57673048-5103-4b04-8ef3-777cb1a33601" (UID: "57673048-5103-4b04-8ef3-777cb1a33601"). InnerVolumeSpecName "kube-api-access-5hsx8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.396504 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:36:04 crc kubenswrapper[5136]: E0320 08:36:04.396823 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.459448 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hsx8\" (UniqueName: \"kubernetes.io/projected/57673048-5103-4b04-8ef3-777cb1a33601-kube-api-access-5hsx8\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.956956 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566596-npcd2" Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.956973 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566596-npcd2" event={"ID":"57673048-5103-4b04-8ef3-777cb1a33601","Type":"ContainerDied","Data":"25f19349296af7f41f743deb68f99919c31d1985b70314e7f931b4a5e5efad4c"} Mar 20 08:36:04 crc kubenswrapper[5136]: I0320 08:36:04.956999 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25f19349296af7f41f743deb68f99919c31d1985b70314e7f931b4a5e5efad4c" Mar 20 08:36:05 crc kubenswrapper[5136]: I0320 08:36:05.319933 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566590-9pt5f"] Mar 20 08:36:05 crc kubenswrapper[5136]: I0320 08:36:05.325055 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566590-9pt5f"] Mar 20 08:36:06 crc kubenswrapper[5136]: I0320 08:36:06.413004 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c8efa5-d30c-4426-ad6e-4aa0880c0563" path="/var/lib/kubelet/pods/51c8efa5-d30c-4426-ad6e-4aa0880c0563/volumes" Mar 20 08:36:10 crc kubenswrapper[5136]: I0320 08:36:10.780325 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:10 crc kubenswrapper[5136]: I0320 08:36:10.782790 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:10 crc kubenswrapper[5136]: I0320 08:36:10.857408 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:11 crc kubenswrapper[5136]: I0320 08:36:11.045871 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:11 crc 
kubenswrapper[5136]: I0320 08:36:11.095737 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.012581 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7mclm" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="registry-server" containerID="cri-o://afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc" gracePeriod=2 Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.398595 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.496485 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqz79\" (UniqueName: \"kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79\") pod \"629a83e8-57da-42f6-b4f5-b7389a04f960\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.496655 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content\") pod \"629a83e8-57da-42f6-b4f5-b7389a04f960\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.498983 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities\") pod \"629a83e8-57da-42f6-b4f5-b7389a04f960\" (UID: \"629a83e8-57da-42f6-b4f5-b7389a04f960\") " Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.499925 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities" (OuterVolumeSpecName: "utilities") pod "629a83e8-57da-42f6-b4f5-b7389a04f960" (UID: "629a83e8-57da-42f6-b4f5-b7389a04f960"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.503508 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79" (OuterVolumeSpecName: "kube-api-access-xqz79") pod "629a83e8-57da-42f6-b4f5-b7389a04f960" (UID: "629a83e8-57da-42f6-b4f5-b7389a04f960"). InnerVolumeSpecName "kube-api-access-xqz79". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.601410 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqz79\" (UniqueName: \"kubernetes.io/projected/629a83e8-57da-42f6-b4f5-b7389a04f960-kube-api-access-xqz79\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.601890 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:13 crc kubenswrapper[5136]: I0320 08:36:13.963987 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "629a83e8-57da-42f6-b4f5-b7389a04f960" (UID: "629a83e8-57da-42f6-b4f5-b7389a04f960"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.007399 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/629a83e8-57da-42f6-b4f5-b7389a04f960-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.023750 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mclm" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.023751 5136 generic.go:334] "Generic (PLEG): container finished" podID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerID="afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc" exitCode=0 Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.023780 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerDied","Data":"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc"} Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.024872 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mclm" event={"ID":"629a83e8-57da-42f6-b4f5-b7389a04f960","Type":"ContainerDied","Data":"44696757fc71a063d2a7087406b0d5d06b985688fd1ca0f48622a56669ea41e7"} Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.024895 5136 scope.go:117] "RemoveContainer" containerID="afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.043747 5136 scope.go:117] "RemoveContainer" containerID="b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.059190 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 
08:36:14.064461 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mclm"] Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.090386 5136 scope.go:117] "RemoveContainer" containerID="c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.113531 5136 scope.go:117] "RemoveContainer" containerID="afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc" Mar 20 08:36:14 crc kubenswrapper[5136]: E0320 08:36:14.113989 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc\": container with ID starting with afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc not found: ID does not exist" containerID="afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.114036 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc"} err="failed to get container status \"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc\": rpc error: code = NotFound desc = could not find container \"afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc\": container with ID starting with afbb7d6be8f99ea06340961404cf993557459181d7e79e92176b9d951b243cbc not found: ID does not exist" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.114066 5136 scope.go:117] "RemoveContainer" containerID="b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d" Mar 20 08:36:14 crc kubenswrapper[5136]: E0320 08:36:14.114495 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d\": container with ID 
starting with b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d not found: ID does not exist" containerID="b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.114542 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d"} err="failed to get container status \"b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d\": rpc error: code = NotFound desc = could not find container \"b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d\": container with ID starting with b630a65a6e3fb4516f452491782d3614b70dbdac3b68589fb07d278c4e61425d not found: ID does not exist" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.114577 5136 scope.go:117] "RemoveContainer" containerID="c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8" Mar 20 08:36:14 crc kubenswrapper[5136]: E0320 08:36:14.114857 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8\": container with ID starting with c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8 not found: ID does not exist" containerID="c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.114885 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8"} err="failed to get container status \"c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8\": rpc error: code = NotFound desc = could not find container \"c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8\": container with ID starting with c0ac90e3e6b7ed61f84f92ffaa9c25b7f6d6becc91e8e18922d43f39ad0aa9c8 not found: 
ID does not exist" Mar 20 08:36:14 crc kubenswrapper[5136]: I0320 08:36:14.408959 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" path="/var/lib/kubelet/pods/629a83e8-57da-42f6-b4f5-b7389a04f960/volumes" Mar 20 08:36:19 crc kubenswrapper[5136]: I0320 08:36:19.396870 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:36:20 crc kubenswrapper[5136]: I0320 08:36:20.066167 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271"} Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.808884 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:33 crc kubenswrapper[5136]: E0320 08:36:33.810039 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="extract-content" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810067 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="extract-content" Mar 20 08:36:33 crc kubenswrapper[5136]: E0320 08:36:33.810093 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57673048-5103-4b04-8ef3-777cb1a33601" containerName="oc" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810106 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="57673048-5103-4b04-8ef3-777cb1a33601" containerName="oc" Mar 20 08:36:33 crc kubenswrapper[5136]: E0320 08:36:33.810154 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="registry-server" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810169 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="registry-server" Mar 20 08:36:33 crc kubenswrapper[5136]: E0320 08:36:33.810188 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="extract-utilities" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810201 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="extract-utilities" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810507 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="629a83e8-57da-42f6-b4f5-b7389a04f960" containerName="registry-server" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.810545 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="57673048-5103-4b04-8ef3-777cb1a33601" containerName="oc" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.812510 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.819228 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.902841 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.902904 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkvfj\" (UniqueName: \"kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:33 crc kubenswrapper[5136]: I0320 08:36:33.902944 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.003994 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.004040 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mkvfj\" (UniqueName: \"kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.004073 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.004642 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.004653 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.023085 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkvfj\" (UniqueName: \"kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj\") pod \"community-operators-5wtjq\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.169553 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:34 crc kubenswrapper[5136]: I0320 08:36:34.662065 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:35 crc kubenswrapper[5136]: I0320 08:36:35.176667 5136 generic.go:334] "Generic (PLEG): container finished" podID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerID="6c3988e24586b17da796a7ef157d6d1682c289a56b9f4d8591c3323366fe474a" exitCode=0 Mar 20 08:36:35 crc kubenswrapper[5136]: I0320 08:36:35.176713 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerDied","Data":"6c3988e24586b17da796a7ef157d6d1682c289a56b9f4d8591c3323366fe474a"} Mar 20 08:36:35 crc kubenswrapper[5136]: I0320 08:36:35.176738 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerStarted","Data":"e704de3f8b18e2cfcfbf126c8879ca20f897587bc167a14a1dfd40dcd96c29db"} Mar 20 08:36:36 crc kubenswrapper[5136]: I0320 08:36:36.190659 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerStarted","Data":"06b1f730aa1414898c07cbc6c51c6d45af0404b9bfb57fe9ffcd166953c77231"} Mar 20 08:36:37 crc kubenswrapper[5136]: I0320 08:36:37.199874 5136 generic.go:334] "Generic (PLEG): container finished" podID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerID="06b1f730aa1414898c07cbc6c51c6d45af0404b9bfb57fe9ffcd166953c77231" exitCode=0 Mar 20 08:36:37 crc kubenswrapper[5136]: I0320 08:36:37.200013 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" 
event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerDied","Data":"06b1f730aa1414898c07cbc6c51c6d45af0404b9bfb57fe9ffcd166953c77231"} Mar 20 08:36:37 crc kubenswrapper[5136]: I0320 08:36:37.821211 5136 scope.go:117] "RemoveContainer" containerID="d67dfe1060ac0ac0db1818a3ab60ffceda0123c6ffe3b59b89e0430a3ae809a2" Mar 20 08:36:37 crc kubenswrapper[5136]: I0320 08:36:37.898951 5136 scope.go:117] "RemoveContainer" containerID="a84d841fa14dbb7d163049ae2a42d3d241fc2e9ace22731699a4238f410674cb" Mar 20 08:36:38 crc kubenswrapper[5136]: I0320 08:36:38.211088 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerStarted","Data":"0b9725a1a007e9e06dc65c9de35ec11e1988db24ddf19ad259d83257a7e02b57"} Mar 20 08:36:38 crc kubenswrapper[5136]: I0320 08:36:38.254560 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5wtjq" podStartSLOduration=2.691861855 podStartE2EDuration="5.254539798s" podCreationTimestamp="2026-03-20 08:36:33 +0000 UTC" firstStartedPulling="2026-03-20 08:36:35.179042524 +0000 UTC m=+6427.438353675" lastFinishedPulling="2026-03-20 08:36:37.741720427 +0000 UTC m=+6430.001031618" observedRunningTime="2026-03-20 08:36:38.243482235 +0000 UTC m=+6430.502793416" watchObservedRunningTime="2026-03-20 08:36:38.254539798 +0000 UTC m=+6430.513850959" Mar 20 08:36:44 crc kubenswrapper[5136]: I0320 08:36:44.170214 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:44 crc kubenswrapper[5136]: I0320 08:36:44.170765 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:44 crc kubenswrapper[5136]: I0320 08:36:44.259538 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:44 crc kubenswrapper[5136]: I0320 08:36:44.301848 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:47 crc kubenswrapper[5136]: I0320 08:36:47.779888 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:47 crc kubenswrapper[5136]: I0320 08:36:47.780342 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5wtjq" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="registry-server" containerID="cri-o://0b9725a1a007e9e06dc65c9de35ec11e1988db24ddf19ad259d83257a7e02b57" gracePeriod=2 Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.299261 5136 generic.go:334] "Generic (PLEG): container finished" podID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerID="0b9725a1a007e9e06dc65c9de35ec11e1988db24ddf19ad259d83257a7e02b57" exitCode=0 Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.299625 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerDied","Data":"0b9725a1a007e9e06dc65c9de35ec11e1988db24ddf19ad259d83257a7e02b57"} Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.466992 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.538954 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities\") pod \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.539016 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkvfj\" (UniqueName: \"kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj\") pod \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.539151 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content\") pod \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\" (UID: \"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7\") " Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.539679 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities" (OuterVolumeSpecName: "utilities") pod "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" (UID: "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.545488 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj" (OuterVolumeSpecName: "kube-api-access-mkvfj") pod "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" (UID: "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7"). InnerVolumeSpecName "kube-api-access-mkvfj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.587766 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" (UID: "6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.640291 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.642785 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:48 crc kubenswrapper[5136]: I0320 08:36:48.642807 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkvfj\" (UniqueName: \"kubernetes.io/projected/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7-kube-api-access-mkvfj\") on node \"crc\" DevicePath \"\"" Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.309774 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wtjq" event={"ID":"6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7","Type":"ContainerDied","Data":"e704de3f8b18e2cfcfbf126c8879ca20f897587bc167a14a1dfd40dcd96c29db"} Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.309973 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5wtjq" Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.310185 5136 scope.go:117] "RemoveContainer" containerID="0b9725a1a007e9e06dc65c9de35ec11e1988db24ddf19ad259d83257a7e02b57" Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.333103 5136 scope.go:117] "RemoveContainer" containerID="06b1f730aa1414898c07cbc6c51c6d45af0404b9bfb57fe9ffcd166953c77231" Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.358006 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.361595 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5wtjq"] Mar 20 08:36:49 crc kubenswrapper[5136]: I0320 08:36:49.375912 5136 scope.go:117] "RemoveContainer" containerID="6c3988e24586b17da796a7ef157d6d1682c289a56b9f4d8591c3323366fe474a" Mar 20 08:36:50 crc kubenswrapper[5136]: I0320 08:36:50.410548 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" path="/var/lib/kubelet/pods/6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7/volumes" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.141930 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566598-9zdgt"] Mar 20 08:38:00 crc kubenswrapper[5136]: E0320 08:38:00.142893 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="registry-server" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.142907 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="registry-server" Mar 20 08:38:00 crc kubenswrapper[5136]: E0320 08:38:00.142934 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="extract-content" Mar 20 
08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.142940 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="extract-content" Mar 20 08:38:00 crc kubenswrapper[5136]: E0320 08:38:00.142974 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="extract-utilities" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.142982 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="extract-utilities" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.143165 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b98fbd5-e4c4-4ad3-831a-1f22f4cc99d7" containerName="registry-server" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.143977 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.146450 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.146713 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.154742 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.162619 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566598-9zdgt"] Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.281417 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w4np\" (UniqueName: \"kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np\") pod 
\"auto-csr-approver-29566598-9zdgt\" (UID: \"9379207f-99bf-4561-8979-f27be8f510ac\") " pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.382585 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w4np\" (UniqueName: \"kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np\") pod \"auto-csr-approver-29566598-9zdgt\" (UID: \"9379207f-99bf-4561-8979-f27be8f510ac\") " pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.400670 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w4np\" (UniqueName: \"kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np\") pod \"auto-csr-approver-29566598-9zdgt\" (UID: \"9379207f-99bf-4561-8979-f27be8f510ac\") " pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.489545 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.904851 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566598-9zdgt"] Mar 20 08:38:00 crc kubenswrapper[5136]: W0320 08:38:00.911872 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9379207f_99bf_4561_8979_f27be8f510ac.slice/crio-13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6 WatchSource:0}: Error finding container 13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6: Status 404 returned error can't find the container with id 13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6 Mar 20 08:38:00 crc kubenswrapper[5136]: I0320 08:38:00.915855 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:38:01 crc kubenswrapper[5136]: I0320 08:38:01.877248 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" event={"ID":"9379207f-99bf-4561-8979-f27be8f510ac","Type":"ContainerStarted","Data":"13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6"} Mar 20 08:38:02 crc kubenswrapper[5136]: I0320 08:38:02.889168 5136 generic.go:334] "Generic (PLEG): container finished" podID="9379207f-99bf-4561-8979-f27be8f510ac" containerID="e29edc0f4375ac391060cb753f50bdb9915298f531076d0c17e85a24815a777f" exitCode=0 Mar 20 08:38:02 crc kubenswrapper[5136]: I0320 08:38:02.889211 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" event={"ID":"9379207f-99bf-4561-8979-f27be8f510ac","Type":"ContainerDied","Data":"e29edc0f4375ac391060cb753f50bdb9915298f531076d0c17e85a24815a777f"} Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.237051 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.339321 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7w4np\" (UniqueName: \"kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np\") pod \"9379207f-99bf-4561-8979-f27be8f510ac\" (UID: \"9379207f-99bf-4561-8979-f27be8f510ac\") " Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.346995 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np" (OuterVolumeSpecName: "kube-api-access-7w4np") pod "9379207f-99bf-4561-8979-f27be8f510ac" (UID: "9379207f-99bf-4561-8979-f27be8f510ac"). InnerVolumeSpecName "kube-api-access-7w4np". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.441463 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7w4np\" (UniqueName: \"kubernetes.io/projected/9379207f-99bf-4561-8979-f27be8f510ac-kube-api-access-7w4np\") on node \"crc\" DevicePath \"\"" Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.908375 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" event={"ID":"9379207f-99bf-4561-8979-f27be8f510ac","Type":"ContainerDied","Data":"13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6"} Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.908415 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13f31f8a2f9dfaeb1bfc59a061ff92c88ca3411a1e165d7ebd7d4299ce5754c6" Mar 20 08:38:04 crc kubenswrapper[5136]: I0320 08:38:04.908458 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566598-9zdgt" Mar 20 08:38:05 crc kubenswrapper[5136]: I0320 08:38:05.306004 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566592-gnh7d"] Mar 20 08:38:05 crc kubenswrapper[5136]: I0320 08:38:05.311577 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566592-gnh7d"] Mar 20 08:38:06 crc kubenswrapper[5136]: I0320 08:38:06.405370 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e2af690-159e-4938-b0b0-35e042cc8393" path="/var/lib/kubelet/pods/2e2af690-159e-4938-b0b0-35e042cc8393/volumes" Mar 20 08:38:38 crc kubenswrapper[5136]: I0320 08:38:38.046969 5136 scope.go:117] "RemoveContainer" containerID="6f1e73339774fdb849b7c14ca46c4e23637ecc11d975480f8593fb668065f9a0" Mar 20 08:38:45 crc kubenswrapper[5136]: I0320 08:38:45.821807 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:38:45 crc kubenswrapper[5136]: I0320 08:38:45.824436 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:39:15 crc kubenswrapper[5136]: I0320 08:39:15.821972 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:39:15 crc kubenswrapper[5136]: 
I0320 08:39:15.822552 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:39:38 crc kubenswrapper[5136]: I0320 08:39:38.115303 5136 scope.go:117] "RemoveContainer" containerID="513bfb357219a2477e16b62515cf153229315c47b91f7217842266ad20b33891" Mar 20 08:39:45 crc kubenswrapper[5136]: I0320 08:39:45.821528 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:39:45 crc kubenswrapper[5136]: I0320 08:39:45.822126 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:39:45 crc kubenswrapper[5136]: I0320 08:39:45.822175 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:39:45 crc kubenswrapper[5136]: I0320 08:39:45.823058 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:39:45 crc kubenswrapper[5136]: I0320 08:39:45.823114 5136 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271" gracePeriod=600 Mar 20 08:39:46 crc kubenswrapper[5136]: I0320 08:39:46.744634 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271" exitCode=0 Mar 20 08:39:46 crc kubenswrapper[5136]: I0320 08:39:46.744674 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271"} Mar 20 08:39:46 crc kubenswrapper[5136]: I0320 08:39:46.745117 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"} Mar 20 08:39:46 crc kubenswrapper[5136]: I0320 08:39:46.745152 5136 scope.go:117] "RemoveContainer" containerID="d335cf61765db410fb80b1d844bb802e35ea05eca1ebb83a5203dc037e1589b5" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.145950 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566600-2m6nn"] Mar 20 08:40:00 crc kubenswrapper[5136]: E0320 08:40:00.147013 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9379207f-99bf-4561-8979-f27be8f510ac" containerName="oc" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.147025 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9379207f-99bf-4561-8979-f27be8f510ac" containerName="oc" Mar 20 08:40:00 crc 
kubenswrapper[5136]: I0320 08:40:00.147185 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9379207f-99bf-4561-8979-f27be8f510ac" containerName="oc" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.147665 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.150152 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.150381 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.150505 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.165437 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566600-2m6nn"] Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.239020 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qcnw\" (UniqueName: \"kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw\") pod \"auto-csr-approver-29566600-2m6nn\" (UID: \"3480cf66-9f91-4ce8-924c-0f730044c0de\") " pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.340563 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qcnw\" (UniqueName: \"kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw\") pod \"auto-csr-approver-29566600-2m6nn\" (UID: \"3480cf66-9f91-4ce8-924c-0f730044c0de\") " pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.361705 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qcnw\" (UniqueName: \"kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw\") pod \"auto-csr-approver-29566600-2m6nn\" (UID: \"3480cf66-9f91-4ce8-924c-0f730044c0de\") " pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.512171 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.787414 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566600-2m6nn"] Mar 20 08:40:00 crc kubenswrapper[5136]: I0320 08:40:00.887647 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" event={"ID":"3480cf66-9f91-4ce8-924c-0f730044c0de","Type":"ContainerStarted","Data":"ddc706e6d31258b0f9c34cf6b41b2730d0ba2080de3293fcc901fa16248cc62c"} Mar 20 08:40:02 crc kubenswrapper[5136]: I0320 08:40:02.905483 5136 generic.go:334] "Generic (PLEG): container finished" podID="3480cf66-9f91-4ce8-924c-0f730044c0de" containerID="ada9ce7b7b306f2b5dbbf312318f5ac5adc2a593ce372df15119878b742a8edb" exitCode=0 Mar 20 08:40:02 crc kubenswrapper[5136]: I0320 08:40:02.905544 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" event={"ID":"3480cf66-9f91-4ce8-924c-0f730044c0de","Type":"ContainerDied","Data":"ada9ce7b7b306f2b5dbbf312318f5ac5adc2a593ce372df15119878b742a8edb"} Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.209297 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.303249 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qcnw\" (UniqueName: \"kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw\") pod \"3480cf66-9f91-4ce8-924c-0f730044c0de\" (UID: \"3480cf66-9f91-4ce8-924c-0f730044c0de\") " Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.308848 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw" (OuterVolumeSpecName: "kube-api-access-6qcnw") pod "3480cf66-9f91-4ce8-924c-0f730044c0de" (UID: "3480cf66-9f91-4ce8-924c-0f730044c0de"). InnerVolumeSpecName "kube-api-access-6qcnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.404881 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qcnw\" (UniqueName: \"kubernetes.io/projected/3480cf66-9f91-4ce8-924c-0f730044c0de-kube-api-access-6qcnw\") on node \"crc\" DevicePath \"\"" Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.935831 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" event={"ID":"3480cf66-9f91-4ce8-924c-0f730044c0de","Type":"ContainerDied","Data":"ddc706e6d31258b0f9c34cf6b41b2730d0ba2080de3293fcc901fa16248cc62c"} Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.935868 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddc706e6d31258b0f9c34cf6b41b2730d0ba2080de3293fcc901fa16248cc62c" Mar 20 08:40:04 crc kubenswrapper[5136]: I0320 08:40:04.935994 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566600-2m6nn" Mar 20 08:40:05 crc kubenswrapper[5136]: I0320 08:40:05.309379 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566594-shcj9"] Mar 20 08:40:05 crc kubenswrapper[5136]: I0320 08:40:05.319753 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566594-shcj9"] Mar 20 08:40:06 crc kubenswrapper[5136]: I0320 08:40:06.405125 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="948b6ddf-f1f2-46ef-9d9f-1e07c71f593e" path="/var/lib/kubelet/pods/948b6ddf-f1f2-46ef-9d9f-1e07c71f593e/volumes" Mar 20 08:40:38 crc kubenswrapper[5136]: I0320 08:40:38.164481 5136 scope.go:117] "RemoveContainer" containerID="46b6ff50442d3c65cf954ac428d83a34b6951bd632d4aff5243b9fdb9f413511" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.132830 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566602-fmfs9"] Mar 20 08:42:00 crc kubenswrapper[5136]: E0320 08:42:00.153967 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3480cf66-9f91-4ce8-924c-0f730044c0de" containerName="oc" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.154010 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3480cf66-9f91-4ce8-924c-0f730044c0de" containerName="oc" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.154486 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3480cf66-9f91-4ce8-924c-0f730044c0de" containerName="oc" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.155202 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.159235 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.159381 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.159434 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.166625 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566602-fmfs9"] Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.234244 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sq8zq\" (UniqueName: \"kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq\") pod \"auto-csr-approver-29566602-fmfs9\" (UID: \"78e36980-52e2-4a59-9374-b2f1150fcb20\") " pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.335573 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sq8zq\" (UniqueName: \"kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq\") pod \"auto-csr-approver-29566602-fmfs9\" (UID: \"78e36980-52e2-4a59-9374-b2f1150fcb20\") " pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.365045 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sq8zq\" (UniqueName: \"kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq\") pod \"auto-csr-approver-29566602-fmfs9\" (UID: \"78e36980-52e2-4a59-9374-b2f1150fcb20\") " 
pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.485376 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:00 crc kubenswrapper[5136]: I0320 08:42:00.935865 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566602-fmfs9"] Mar 20 08:42:01 crc kubenswrapper[5136]: I0320 08:42:01.935077 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" event={"ID":"78e36980-52e2-4a59-9374-b2f1150fcb20","Type":"ContainerStarted","Data":"bd20f8c76dd9d09f073435747b85590383d5c4e75188df4f3da12fea34405641"} Mar 20 08:42:02 crc kubenswrapper[5136]: I0320 08:42:02.949554 5136 generic.go:334] "Generic (PLEG): container finished" podID="78e36980-52e2-4a59-9374-b2f1150fcb20" containerID="4d16220fc9b1db88fdb1fbb167050afb3f65c942a2a02caf4ba1ec80a2858ccc" exitCode=0 Mar 20 08:42:02 crc kubenswrapper[5136]: I0320 08:42:02.949656 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" event={"ID":"78e36980-52e2-4a59-9374-b2f1150fcb20","Type":"ContainerDied","Data":"4d16220fc9b1db88fdb1fbb167050afb3f65c942a2a02caf4ba1ec80a2858ccc"} Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.329509 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.515426 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sq8zq\" (UniqueName: \"kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq\") pod \"78e36980-52e2-4a59-9374-b2f1150fcb20\" (UID: \"78e36980-52e2-4a59-9374-b2f1150fcb20\") " Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.522209 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq" (OuterVolumeSpecName: "kube-api-access-sq8zq") pod "78e36980-52e2-4a59-9374-b2f1150fcb20" (UID: "78e36980-52e2-4a59-9374-b2f1150fcb20"). InnerVolumeSpecName "kube-api-access-sq8zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.617197 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sq8zq\" (UniqueName: \"kubernetes.io/projected/78e36980-52e2-4a59-9374-b2f1150fcb20-kube-api-access-sq8zq\") on node \"crc\" DevicePath \"\"" Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.966680 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" event={"ID":"78e36980-52e2-4a59-9374-b2f1150fcb20","Type":"ContainerDied","Data":"bd20f8c76dd9d09f073435747b85590383d5c4e75188df4f3da12fea34405641"} Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.966718 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566602-fmfs9" Mar 20 08:42:04 crc kubenswrapper[5136]: I0320 08:42:04.966720 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd20f8c76dd9d09f073435747b85590383d5c4e75188df4f3da12fea34405641" Mar 20 08:42:05 crc kubenswrapper[5136]: I0320 08:42:05.428084 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566596-npcd2"] Mar 20 08:42:05 crc kubenswrapper[5136]: I0320 08:42:05.440445 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566596-npcd2"] Mar 20 08:42:06 crc kubenswrapper[5136]: I0320 08:42:06.406153 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57673048-5103-4b04-8ef3-777cb1a33601" path="/var/lib/kubelet/pods/57673048-5103-4b04-8ef3-777cb1a33601/volumes" Mar 20 08:42:15 crc kubenswrapper[5136]: I0320 08:42:15.822605 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:42:15 crc kubenswrapper[5136]: I0320 08:42:15.823593 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:42:38 crc kubenswrapper[5136]: I0320 08:42:38.268522 5136 scope.go:117] "RemoveContainer" containerID="ef67e23c79ff3a82593be1acaca432453adbd354cb50446c81640884957e8ffe" Mar 20 08:42:45 crc kubenswrapper[5136]: I0320 08:42:45.822077 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:42:45 crc kubenswrapper[5136]: I0320 08:42:45.822695 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.227210 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-copy-data"] Mar 20 08:43:05 crc kubenswrapper[5136]: E0320 08:43:05.228429 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78e36980-52e2-4a59-9374-b2f1150fcb20" containerName="oc" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.228458 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="78e36980-52e2-4a59-9374-b2f1150fcb20" containerName="oc" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.228785 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="78e36980-52e2-4a59-9374-b2f1150fcb20" containerName="oc" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.230033 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.234060 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-k465q" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.235614 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.352201 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc9kw\" (UniqueName: \"kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.352303 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.454171 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc9kw\" (UniqueName: \"kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.454267 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" 
Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.459336 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.459395 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6f188fb104d832941e804d190179c78e8fbf49d372cf3c70e7b37a8db21f0157/globalmount\"" pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.478425 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc9kw\" (UniqueName: \"kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.486611 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") pod \"mariadb-copy-data\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " pod="openstack/mariadb-copy-data" Mar 20 08:43:05 crc kubenswrapper[5136]: I0320 08:43:05.553443 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-copy-data" Mar 20 08:43:06 crc kubenswrapper[5136]: I0320 08:43:06.082261 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-copy-data"] Mar 20 08:43:06 crc kubenswrapper[5136]: I0320 08:43:06.446122 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226","Type":"ContainerStarted","Data":"a92d1246191abb0ecac747ee2b70c76f1c8719ff40ea6ea858ab04c5eaa88bb1"} Mar 20 08:43:06 crc kubenswrapper[5136]: I0320 08:43:06.446197 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226","Type":"ContainerStarted","Data":"2c2faf3df1acecb9c43fb4e3dfa1b1bce7305d443462043dbca7203ee15e6fb8"} Mar 20 08:43:06 crc kubenswrapper[5136]: I0320 08:43:06.461284 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mariadb-copy-data" podStartSLOduration=2.461266155 podStartE2EDuration="2.461266155s" podCreationTimestamp="2026-03-20 08:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:43:06.459467668 +0000 UTC m=+6818.718778819" watchObservedRunningTime="2026-03-20 08:43:06.461266155 +0000 UTC m=+6818.720577306" Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.552315 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.554934 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.561092 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.739261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2l9kf\" (UniqueName: \"kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf\") pod \"mariadb-client\" (UID: \"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62\") " pod="openstack/mariadb-client" Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.840561 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2l9kf\" (UniqueName: \"kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf\") pod \"mariadb-client\" (UID: \"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62\") " pod="openstack/mariadb-client" Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.873353 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2l9kf\" (UniqueName: \"kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf\") pod \"mariadb-client\" (UID: \"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62\") " pod="openstack/mariadb-client" Mar 20 08:43:10 crc kubenswrapper[5136]: I0320 08:43:10.885003 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.073658 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-sbbcl"] Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.080564 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-sbbcl"] Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.143395 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.492416 5136 generic.go:334] "Generic (PLEG): container finished" podID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" containerID="18976fde7b0e8720c3912ec558d2b411507101e156ae43c4e540472db0f27db1" exitCode=0 Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.492491 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62","Type":"ContainerDied","Data":"18976fde7b0e8720c3912ec558d2b411507101e156ae43c4e540472db0f27db1"} Mar 20 08:43:11 crc kubenswrapper[5136]: I0320 08:43:11.492544 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62","Type":"ContainerStarted","Data":"fbcb77051c8db7e2a986fe94d3dfcafb7ff812751abf891da7573c2b247b8f1d"} Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.406983 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a25720ab-064e-40ce-ae93-03dd9c33cf66" path="/var/lib/kubelet/pods/a25720ab-064e-40ce-ae93-03dd9c33cf66/volumes" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.750225 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.772244 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62/mariadb-client/0.log" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.801163 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.806832 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.874172 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2l9kf\" (UniqueName: \"kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf\") pod \"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62\" (UID: \"816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62\") " Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.878299 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf" (OuterVolumeSpecName: "kube-api-access-2l9kf") pod "816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" (UID: "816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62"). InnerVolumeSpecName "kube-api-access-2l9kf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.961797 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:12 crc kubenswrapper[5136]: E0320 08:43:12.962766 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" containerName="mariadb-client" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.962786 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" containerName="mariadb-client" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.963053 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" containerName="mariadb-client" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.963562 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.968842 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:12 crc kubenswrapper[5136]: I0320 08:43:12.979569 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2l9kf\" (UniqueName: \"kubernetes.io/projected/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62-kube-api-access-2l9kf\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.081222 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn5j2\" (UniqueName: \"kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2\") pod \"mariadb-client\" (UID: \"14707dd3-6d0b-4720-aeb1-f92f46c97812\") " pod="openstack/mariadb-client" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.183100 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn5j2\" (UniqueName: 
\"kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2\") pod \"mariadb-client\" (UID: \"14707dd3-6d0b-4720-aeb1-f92f46c97812\") " pod="openstack/mariadb-client" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.213656 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn5j2\" (UniqueName: \"kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2\") pod \"mariadb-client\" (UID: \"14707dd3-6d0b-4720-aeb1-f92f46c97812\") " pod="openstack/mariadb-client" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.289199 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.509774 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbcb77051c8db7e2a986fe94d3dfcafb7ff812751abf891da7573c2b247b8f1d" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.510087 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.537865 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/mariadb-client" oldPodUID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" podUID="14707dd3-6d0b-4720-aeb1-f92f46c97812" Mar 20 08:43:13 crc kubenswrapper[5136]: I0320 08:43:13.557132 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:13 crc kubenswrapper[5136]: W0320 08:43:13.563085 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14707dd3_6d0b_4720_aeb1_f92f46c97812.slice/crio-7d4cde7afbb7115163b0ec3aae84ceafb83bb9aa037af2310fec9dbcd1e2706a WatchSource:0}: Error finding container 7d4cde7afbb7115163b0ec3aae84ceafb83bb9aa037af2310fec9dbcd1e2706a: Status 404 returned error can't find the container with id 7d4cde7afbb7115163b0ec3aae84ceafb83bb9aa037af2310fec9dbcd1e2706a Mar 20 08:43:14 crc kubenswrapper[5136]: I0320 08:43:14.416026 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62" path="/var/lib/kubelet/pods/816e5f0b-1ec0-42d6-9dfc-6bb0797f0c62/volumes" Mar 20 08:43:14 crc kubenswrapper[5136]: I0320 08:43:14.517800 5136 generic.go:334] "Generic (PLEG): container finished" podID="14707dd3-6d0b-4720-aeb1-f92f46c97812" containerID="c1c133ed294395cb16b8da83cdd09b7bc0d67e81462296562eb505d9f9f44e6f" exitCode=0 Mar 20 08:43:14 crc kubenswrapper[5136]: I0320 08:43:14.517906 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" event={"ID":"14707dd3-6d0b-4720-aeb1-f92f46c97812","Type":"ContainerDied","Data":"c1c133ed294395cb16b8da83cdd09b7bc0d67e81462296562eb505d9f9f44e6f"} Mar 20 08:43:14 crc kubenswrapper[5136]: I0320 08:43:14.517949 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-client" 
event={"ID":"14707dd3-6d0b-4720-aeb1-f92f46c97812","Type":"ContainerStarted","Data":"7d4cde7afbb7115163b0ec3aae84ceafb83bb9aa037af2310fec9dbcd1e2706a"} Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.821699 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.822041 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.822077 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.822650 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.822694 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" gracePeriod=600 Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.874777 
5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.894331 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mariadb-client_14707dd3-6d0b-4720-aeb1-f92f46c97812/mariadb-client/0.log" Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.923171 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:15 crc kubenswrapper[5136]: I0320 08:43:15.929043 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mariadb-client"] Mar 20 08:43:15 crc kubenswrapper[5136]: E0320 08:43:15.952321 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.028355 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vn5j2\" (UniqueName: \"kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2\") pod \"14707dd3-6d0b-4720-aeb1-f92f46c97812\" (UID: \"14707dd3-6d0b-4720-aeb1-f92f46c97812\") " Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.039062 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2" (OuterVolumeSpecName: "kube-api-access-vn5j2") pod "14707dd3-6d0b-4720-aeb1-f92f46c97812" (UID: "14707dd3-6d0b-4720-aeb1-f92f46c97812"). InnerVolumeSpecName "kube-api-access-vn5j2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.130188 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vn5j2\" (UniqueName: \"kubernetes.io/projected/14707dd3-6d0b-4720-aeb1-f92f46c97812-kube-api-access-vn5j2\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.414277 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14707dd3-6d0b-4720-aeb1-f92f46c97812" path="/var/lib/kubelet/pods/14707dd3-6d0b-4720-aeb1-f92f46c97812/volumes" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.543711 5136 scope.go:117] "RemoveContainer" containerID="c1c133ed294395cb16b8da83cdd09b7bc0d67e81462296562eb505d9f9f44e6f" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.543922 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-client" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.555259 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" exitCode=0 Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.555309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"} Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.555784 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:43:16 crc kubenswrapper[5136]: E0320 08:43:16.556084 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:43:16 crc kubenswrapper[5136]: I0320 08:43:16.571706 5136 scope.go:117] "RemoveContainer" containerID="fd37942a39d253b6cfedd4ab695ecb8599827a8a29ad0a2c3607795c58193271" Mar 20 08:43:30 crc kubenswrapper[5136]: I0320 08:43:30.396343 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:43:30 crc kubenswrapper[5136]: E0320 08:43:30.397270 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:43:38 crc kubenswrapper[5136]: I0320 08:43:38.344178 5136 scope.go:117] "RemoveContainer" containerID="77614674db5f14222adee033ce4bf5c60259ff8d124c2f5a8301de91d769caa0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.302623 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 08:43:45 crc kubenswrapper[5136]: E0320 08:43:45.303544 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14707dd3-6d0b-4720-aeb1-f92f46c97812" containerName="mariadb-client" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.303561 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="14707dd3-6d0b-4720-aeb1-f92f46c97812" containerName="mariadb-client" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.303763 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="14707dd3-6d0b-4720-aeb1-f92f46c97812" containerName="mariadb-client" 
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.304769 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.306593 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.307231 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-sww5j" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.307511 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.307796 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.308941 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.317434 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.324497 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.325928 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.335498 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.337064 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.360610 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.378005 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.397755 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:43:45 crc kubenswrapper[5136]: E0320 08:43:45.398024 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437023 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437063 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p255f\" (UniqueName: \"kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437089 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-hznvw\" (UniqueName: \"kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437130 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437146 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437167 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437184 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437209 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437364 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437475 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437546 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437602 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437624 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.437643 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538649 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538705 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538736 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538752 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538775 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538795 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle\") pod 
\"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538861 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538896 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538924 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.538947 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539020 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " 
pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539045 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539087 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzrr5\" (UniqueName: \"kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539113 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539134 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539205 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539235 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539273 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539304 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539327 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539359 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p255f\" (UniqueName: 
\"kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539386 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hznvw\" (UniqueName: \"kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.539786 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.540531 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.540586 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.540616 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 
08:43:45.540907 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.540907 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.545581 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.545851 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.546179 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.546351 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.546390 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/55c96259ee5b46df72e29f8b4f8354d28810ccf4af15b8156942b05dcca234d3/globalmount\"" pod="openstack/ovsdbserver-nb-2"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.546411 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.546441 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/496a4242cd77f4ae3e2362330edd572399213df1c8364c538f89c4da6118351c/globalmount\"" pod="openstack/ovsdbserver-nb-0"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.547483 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.551449 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.553471 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.563151 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p255f\" (UniqueName: \"kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.567096 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hznvw\" (UniqueName: \"kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.580728 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") pod \"ovsdbserver-nb-2\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") " pod="openstack/ovsdbserver-nb-2"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.584369 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") pod \"ovsdbserver-nb-0\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") " pod="openstack/ovsdbserver-nb-0"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.637044 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640683 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640749 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640780 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzrr5\" (UniqueName: \"kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640811 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640885 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640910 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640932 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.640949 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.642034 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.642296 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.642949 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.645211 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.645241 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1b8b1bec848a48f216534b795762c346ec36e6b88f5e71f6ea069d96e42de4bb/globalmount\"" pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.645288 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.646454 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.646887 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.658964 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.661183 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzrr5\" (UniqueName: \"kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.675888 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"ovsdbserver-nb-1\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:45 crc kubenswrapper[5136]: I0320 08:43:45.966593 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.204693 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.210037 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.260014 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.262831 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.266519 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.266647 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.266704 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.266787 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-pn52r"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.269356 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.295999 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-1"]
Mar 20 08:43:46 crc kubenswrapper[5136]: W0320 08:43:46.321033 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2e6ed27_57ea_4ea9_9d66_e1088b5a07d4.slice/crio-3ed0d6ce099ff1f367dca760d285f74c76c709803b8a996aaf1abb5243af3234 WatchSource:0}: Error finding container 3ed0d6ce099ff1f367dca760d285f74c76c709803b8a996aaf1abb5243af3234: Status 404 returned error can't find the container with id 3ed0d6ce099ff1f367dca760d285f74c76c709803b8a996aaf1abb5243af3234
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.323498 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-2"]
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.324125 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.328097 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350027 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350084 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350155 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350174 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350211 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c7dl\" (UniqueName: \"kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350228 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350258 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350276 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.350520 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.361404 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.367375 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451639 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c7dl\" (UniqueName: \"kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451801 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451852 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451930 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451960 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.451992 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452024 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452044 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452066 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452083 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452131 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452151 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452183 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452210 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452241 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452288 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xctk\" (UniqueName: \"kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452316 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452333 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdn2l\" (UniqueName: \"kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452350 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452376 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452395 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452427 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.452719 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.453735 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.455177 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.458537 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.458574 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/628e8694e94b6b991b58eb025a6326c93380697a5f5207dd738b0664b132a053/globalmount\"" pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.458829 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.458934 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.459079 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.473889 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c7dl\" (UniqueName: \"kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.476344 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.496033 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") pod \"ovsdbserver-sb-0\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " pod="openstack/ovsdbserver-sb-0"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553673 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553716 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553776 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553798 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553827 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553843 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553883 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553902 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553925 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553960 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.553979 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.554017 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xctk\" (UniqueName: \"kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.554033 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.554052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdn2l\" (UniqueName: \"kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.554074 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.554132 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.555254 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.555675 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.556002 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.556484 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.556553 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.556575 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b506eb6aeafb6e888123d3ce737c799a4338b90b876c36cf088ffddcc411fa0a/globalmount\"" pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.556605 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.557427 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.558965 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.559527 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.559556 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ef4f0d2c9cdb4aa595550fee76d7e40469fd109f31b60498ae55a6d92861ae4a/globalmount\"" pod="openstack/ovsdbserver-sb-1"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.559658 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.559727 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2"
Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.561220 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.563688 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.564301 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.589419 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2xctk\" (UniqueName: \"kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.591436 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.603678 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") pod \"ovsdbserver-sb-1\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.605464 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdn2l\" (UniqueName: \"kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.609940 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") pod \"ovsdbserver-sb-2\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.650522 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.658108 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.817657 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerStarted","Data":"5870d6b24a1657a079a89b9e9211d461a22b66269225de506dabd34bacc879f1"} Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.826273 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerStarted","Data":"3ed0d6ce099ff1f367dca760d285f74c76c709803b8a996aaf1abb5243af3234"} Mar 20 08:43:46 crc kubenswrapper[5136]: I0320 08:43:46.830164 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerStarted","Data":"e05bc317ee3c118e98b670a0ea0d818712ef16b644c13d4a02ef27c03d16c608"} Mar 20 08:43:47 crc kubenswrapper[5136]: I0320 08:43:47.121152 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 08:43:47 crc kubenswrapper[5136]: W0320 08:43:47.127841 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda276ba4e_bbab_4a83_8fd2_d77573782aa6.slice/crio-56e76ebe8499616e7e38322f1f1fa612b42b6c40fb5b893b8175391424853f23 WatchSource:0}: Error finding container 56e76ebe8499616e7e38322f1f1fa612b42b6c40fb5b893b8175391424853f23: Status 404 returned error can't find the container with id 56e76ebe8499616e7e38322f1f1fa612b42b6c40fb5b893b8175391424853f23 Mar 20 08:43:47 crc kubenswrapper[5136]: I0320 08:43:47.222681 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 20 08:43:47 crc kubenswrapper[5136]: W0320 08:43:47.231037 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7e0c945f_6773_4bf8_872d_7eb5110de79f.slice/crio-f9e584984a933fdd738d890b942f2b5e0effc03a45b854e4fbf95b8869495e71 WatchSource:0}: Error finding container f9e584984a933fdd738d890b942f2b5e0effc03a45b854e4fbf95b8869495e71: Status 404 returned error can't find the container with id f9e584984a933fdd738d890b942f2b5e0effc03a45b854e4fbf95b8869495e71 Mar 20 08:43:47 crc kubenswrapper[5136]: I0320 08:43:47.837425 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerStarted","Data":"56e76ebe8499616e7e38322f1f1fa612b42b6c40fb5b893b8175391424853f23"} Mar 20 08:43:47 crc kubenswrapper[5136]: I0320 08:43:47.839243 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"7e0c945f-6773-4bf8-872d-7eb5110de79f","Type":"ContainerStarted","Data":"f9e584984a933fdd738d890b942f2b5e0effc03a45b854e4fbf95b8869495e71"} Mar 20 08:43:48 crc kubenswrapper[5136]: I0320 08:43:48.478953 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 20 08:43:48 crc kubenswrapper[5136]: I0320 08:43:48.864881 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerStarted","Data":"e68e48705b4cdb3e57af6e933adb8006e7437ee5218f249bb6e11769fe0ee800"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.881063 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerStarted","Data":"8fc531019af740166b849284cf77209f3d16d2b70c219b2f80048bdb08d14be3"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.881653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" 
event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerStarted","Data":"c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.889997 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerStarted","Data":"aa7d63dd9f5d69196ca03f337b7c8a99ee8f9a0db5fd272afae4861760ebba16"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.890042 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerStarted","Data":"f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.893420 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerStarted","Data":"6504065e281b8c5e6e76cf9517fba24d633b6c7805c447e42fbc49093a42beeb"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.893468 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerStarted","Data":"50054ee6901ece61fe4a75813bd8a9abcbe38aad68d2fc8adceaeed3cbddce45"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.896104 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerStarted","Data":"adb0722e1140982d66b6bcc4b53d108b1a1da62c36d81757cff1dc3b6b31b52c"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.896221 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerStarted","Data":"4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff"} Mar 20 08:43:50 crc 
kubenswrapper[5136]: I0320 08:43:50.898941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"7e0c945f-6773-4bf8-872d-7eb5110de79f","Type":"ContainerStarted","Data":"49b205017f1afa7852c73b13644c48ef8382cbecd7e8c8de906f466d5717a06f"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.901757 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerStarted","Data":"2ffece60b271290211f6f3963d1642000676cfce31547f3f28dd8ecf96867815"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.901796 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerStarted","Data":"6359501b6448986da36467b6a23a3fd5909f20740da745780317887a779e734a"} Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.909873 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-2" podStartSLOduration=4.082371376 podStartE2EDuration="5.909853298s" podCreationTimestamp="2026-03-20 08:43:45 +0000 UTC" firstStartedPulling="2026-03-20 08:43:48.487595391 +0000 UTC m=+6860.746906542" lastFinishedPulling="2026-03-20 08:43:50.315077313 +0000 UTC m=+6862.574388464" observedRunningTime="2026-03-20 08:43:50.908201756 +0000 UTC m=+6863.167512927" watchObservedRunningTime="2026-03-20 08:43:50.909853298 +0000 UTC m=+6863.169164449" Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.936923 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=2.827295423 podStartE2EDuration="5.936902177s" podCreationTimestamp="2026-03-20 08:43:45 +0000 UTC" firstStartedPulling="2026-03-20 08:43:47.131467414 +0000 UTC m=+6859.390778565" lastFinishedPulling="2026-03-20 08:43:50.241074168 +0000 UTC m=+6862.500385319" observedRunningTime="2026-03-20 
08:43:50.929613861 +0000 UTC m=+6863.188925022" watchObservedRunningTime="2026-03-20 08:43:50.936902177 +0000 UTC m=+6863.196213328" Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.955576 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-2" podStartSLOduration=3.101116413 podStartE2EDuration="6.955558416s" podCreationTimestamp="2026-03-20 08:43:44 +0000 UTC" firstStartedPulling="2026-03-20 08:43:46.32978064 +0000 UTC m=+6858.589091791" lastFinishedPulling="2026-03-20 08:43:50.184222643 +0000 UTC m=+6862.443533794" observedRunningTime="2026-03-20 08:43:50.953868454 +0000 UTC m=+6863.213179595" watchObservedRunningTime="2026-03-20 08:43:50.955558416 +0000 UTC m=+6863.214869567" Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.967586 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:50 crc kubenswrapper[5136]: I0320 08:43:50.973835 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-1" podStartSLOduration=3.277810595 podStartE2EDuration="6.973802612s" podCreationTimestamp="2026-03-20 08:43:44 +0000 UTC" firstStartedPulling="2026-03-20 08:43:46.487943607 +0000 UTC m=+6858.747254758" lastFinishedPulling="2026-03-20 08:43:50.183935624 +0000 UTC m=+6862.443246775" observedRunningTime="2026-03-20 08:43:50.970957414 +0000 UTC m=+6863.230268585" watchObservedRunningTime="2026-03-20 08:43:50.973802612 +0000 UTC m=+6863.233113763" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.002295 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.991118539 podStartE2EDuration="7.002279736s" podCreationTimestamp="2026-03-20 08:43:44 +0000 UTC" firstStartedPulling="2026-03-20 08:43:46.209760575 +0000 UTC m=+6858.469071726" lastFinishedPulling="2026-03-20 08:43:50.220921772 +0000 UTC m=+6862.480232923" 
observedRunningTime="2026-03-20 08:43:50.994448082 +0000 UTC m=+6863.253759233" watchObservedRunningTime="2026-03-20 08:43:51.002279736 +0000 UTC m=+6863.261590887" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.592311 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.638092 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.659787 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.659919 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.933561 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"7e0c945f-6773-4bf8-872d-7eb5110de79f","Type":"ContainerStarted","Data":"ff7aad450b5bf148e0d8e2a6a1a41eb2960ad7d591108755ada3cf41b5ab3619"} Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.957864 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-1" podStartSLOduration=3.483118071 podStartE2EDuration="6.957846544s" podCreationTimestamp="2026-03-20 08:43:45 +0000 UTC" firstStartedPulling="2026-03-20 08:43:47.233767598 +0000 UTC m=+6859.493078749" lastFinishedPulling="2026-03-20 08:43:50.708496071 +0000 UTC m=+6862.967807222" observedRunningTime="2026-03-20 08:43:51.952125917 +0000 UTC m=+6864.211437078" watchObservedRunningTime="2026-03-20 08:43:51.957846544 +0000 UTC m=+6864.217157695" Mar 20 08:43:51 crc kubenswrapper[5136]: I0320 08:43:51.967123 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:52 crc kubenswrapper[5136]: I0320 08:43:52.593272 5136 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:52 crc kubenswrapper[5136]: I0320 08:43:52.651613 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:52 crc kubenswrapper[5136]: I0320 08:43:52.658927 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:54 crc kubenswrapper[5136]: I0320 08:43:54.676886 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:54 crc kubenswrapper[5136]: I0320 08:43:54.677500 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:54 crc kubenswrapper[5136]: I0320 08:43:54.701244 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:54 crc kubenswrapper[5136]: I0320 08:43:54.702185 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.001162 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.040339 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-1" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.311416 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.315405 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.364444 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.375404 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.414025 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.414271 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.414388 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.414561 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgg4l\" (UniqueName: \"kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " 
pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.516646 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgg4l\" (UniqueName: \"kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.516751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.516793 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.516832 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.517648 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 
08:43:55.518401 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.518902 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.534443 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgg4l\" (UniqueName: \"kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l\") pod \"dnsmasq-dns-559cd67f5f-bzx2s\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.637598 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.682424 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.695353 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.703570 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.712625 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.713653 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.721060 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.730285 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-2" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.803557 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-2" Mar 20 08:43:55 crc kubenswrapper[5136]: I0320 08:43:55.841158 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-1" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.201615 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.226015 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.227873 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.230172 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.241076 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.273530 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.334182 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.334355 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.334502 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.334606 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.334688 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snk2j\" (UniqueName: \"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.436877 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.436955 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.437017 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snk2j\" (UniqueName: \"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.437076 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.437760 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.437919 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.438397 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.439282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.440133 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: 
\"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.460597 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snk2j\" (UniqueName: \"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j\") pod \"dnsmasq-dns-57d68955ff-c2m5x\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.559905 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.980315 5136 generic.go:334] "Generic (PLEG): container finished" podID="0eedd685-b07d-42b2-b7d7-94d10fbb7500" containerID="8cc817dfc7c497e6a23462bb83921054c2b5f523e401e310e81939eba77ad619" exitCode=0 Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.980387 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" event={"ID":"0eedd685-b07d-42b2-b7d7-94d10fbb7500","Type":"ContainerDied","Data":"8cc817dfc7c497e6a23462bb83921054c2b5f523e401e310e81939eba77ad619"} Mar 20 08:43:56 crc kubenswrapper[5136]: I0320 08:43:56.980808 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" event={"ID":"0eedd685-b07d-42b2-b7d7-94d10fbb7500","Type":"ContainerStarted","Data":"caff4e6f120f85a12ee92017b89baedc55dfa14fc848a2243eb2f5949805f777"} Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.051430 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:43:57 crc kubenswrapper[5136]: W0320 08:43:57.061601 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80ba4d56_2bee_4ab9_9acd_c7588d675a4b.slice/crio-167230fdb5149a52edd1822e02c35390c6ca18f21c97fbb6e377a5b71350b091 WatchSource:0}: Error finding container 167230fdb5149a52edd1822e02c35390c6ca18f21c97fbb6e377a5b71350b091: Status 404 returned error can't find the container with id 167230fdb5149a52edd1822e02c35390c6ca18f21c97fbb6e377a5b71350b091 Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.246622 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.357362 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgg4l\" (UniqueName: \"kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l\") pod \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.357425 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb\") pod \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.357641 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config\") pod \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\" (UID: \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.357668 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc\") pod \"0eedd685-b07d-42b2-b7d7-94d10fbb7500\" (UID: 
\"0eedd685-b07d-42b2-b7d7-94d10fbb7500\") " Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.361684 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l" (OuterVolumeSpecName: "kube-api-access-pgg4l") pod "0eedd685-b07d-42b2-b7d7-94d10fbb7500" (UID: "0eedd685-b07d-42b2-b7d7-94d10fbb7500"). InnerVolumeSpecName "kube-api-access-pgg4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.376041 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0eedd685-b07d-42b2-b7d7-94d10fbb7500" (UID: "0eedd685-b07d-42b2-b7d7-94d10fbb7500"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.377496 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config" (OuterVolumeSpecName: "config") pod "0eedd685-b07d-42b2-b7d7-94d10fbb7500" (UID: "0eedd685-b07d-42b2-b7d7-94d10fbb7500"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.388521 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "0eedd685-b07d-42b2-b7d7-94d10fbb7500" (UID: "0eedd685-b07d-42b2-b7d7-94d10fbb7500"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.397237 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:43:57 crc kubenswrapper[5136]: E0320 08:43:57.398010 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.459736 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.459772 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.459785 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgg4l\" (UniqueName: \"kubernetes.io/projected/0eedd685-b07d-42b2-b7d7-94d10fbb7500-kube-api-access-pgg4l\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.459798 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/0eedd685-b07d-42b2-b7d7-94d10fbb7500-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.991988 5136 generic.go:334] "Generic (PLEG): container finished" podID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" 
containerID="e3d5db568a0a051b325af6f8c22b6c105820123adce2d9ee29ab549861506fd4" exitCode=0 Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.992074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" event={"ID":"80ba4d56-2bee-4ab9-9acd-c7588d675a4b","Type":"ContainerDied","Data":"e3d5db568a0a051b325af6f8c22b6c105820123adce2d9ee29ab549861506fd4"} Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.992102 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" event={"ID":"80ba4d56-2bee-4ab9-9acd-c7588d675a4b","Type":"ContainerStarted","Data":"167230fdb5149a52edd1822e02c35390c6ca18f21c97fbb6e377a5b71350b091"} Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.996793 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" event={"ID":"0eedd685-b07d-42b2-b7d7-94d10fbb7500","Type":"ContainerDied","Data":"caff4e6f120f85a12ee92017b89baedc55dfa14fc848a2243eb2f5949805f777"} Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.996866 5136 scope.go:117] "RemoveContainer" containerID="8cc817dfc7c497e6a23462bb83921054c2b5f523e401e310e81939eba77ad619" Mar 20 08:43:57 crc kubenswrapper[5136]: I0320 08:43:57.996944 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-559cd67f5f-bzx2s" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.212126 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.241909 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-559cd67f5f-bzx2s"] Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.407736 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0eedd685-b07d-42b2-b7d7-94d10fbb7500" path="/var/lib/kubelet/pods/0eedd685-b07d-42b2-b7d7-94d10fbb7500/volumes" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.800845 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-copy-data"] Mar 20 08:43:58 crc kubenswrapper[5136]: E0320 08:43:58.801167 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0eedd685-b07d-42b2-b7d7-94d10fbb7500" containerName="init" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.801183 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0eedd685-b07d-42b2-b7d7-94d10fbb7500" containerName="init" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.801341 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0eedd685-b07d-42b2-b7d7-94d10fbb7500" containerName="init" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.801882 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.804028 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovn-data-cert" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.808372 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.988922 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkx6j\" (UniqueName: \"kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.988986 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:58 crc kubenswrapper[5136]: I0320 08:43:58.989350 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.006036 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" event={"ID":"80ba4d56-2bee-4ab9-9acd-c7588d675a4b","Type":"ContainerStarted","Data":"fa96e658106264d429d004b45b4bacbef4dfbc220f9941975579c5341fa77132"} Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.006185 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.022525 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" podStartSLOduration=3.022506944 podStartE2EDuration="3.022506944s" podCreationTimestamp="2026-03-20 08:43:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:43:59.021327387 +0000 UTC m=+6871.280638538" watchObservedRunningTime="2026-03-20 08:43:59.022506944 +0000 UTC m=+6871.281818095" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.091058 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.091184 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkx6j\" (UniqueName: \"kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.091230 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.095064 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.095113 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f22aae1ccf05e63f6579bb99a16fd344875c34039a4f43ed5d40a64cbfffb0e7/globalmount\"" pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.095546 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.108707 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkx6j\" (UniqueName: \"kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.124268 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") pod \"ovn-copy-data\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.426252 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-copy-data" Mar 20 08:43:59 crc kubenswrapper[5136]: I0320 08:43:59.964324 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-copy-data"] Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.017196 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c068d291-989b-4247-8cee-0596033c8ce5","Type":"ContainerStarted","Data":"e581eb3896caa8dce4da5d70ae2539c97df467c09420153c45b9ba77109b2e63"} Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.133103 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566604-xrrdd"] Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.134452 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.137199 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.137788 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.137984 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.142727 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566604-xrrdd"] Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.314682 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqlbm\" (UniqueName: \"kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm\") pod \"auto-csr-approver-29566604-xrrdd\" (UID: \"372179a0-537a-4126-97c1-2d6a045e8798\") " 
pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.417189 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqlbm\" (UniqueName: \"kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm\") pod \"auto-csr-approver-29566604-xrrdd\" (UID: \"372179a0-537a-4126-97c1-2d6a045e8798\") " pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.437959 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqlbm\" (UniqueName: \"kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm\") pod \"auto-csr-approver-29566604-xrrdd\" (UID: \"372179a0-537a-4126-97c1-2d6a045e8798\") " pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.455140 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:00 crc kubenswrapper[5136]: I0320 08:44:00.855731 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566604-xrrdd"] Mar 20 08:44:00 crc kubenswrapper[5136]: W0320 08:44:00.865227 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod372179a0_537a_4126_97c1_2d6a045e8798.slice/crio-9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd WatchSource:0}: Error finding container 9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd: Status 404 returned error can't find the container with id 9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd Mar 20 08:44:01 crc kubenswrapper[5136]: I0320 08:44:01.025981 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" 
event={"ID":"c068d291-989b-4247-8cee-0596033c8ce5","Type":"ContainerStarted","Data":"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386"} Mar 20 08:44:01 crc kubenswrapper[5136]: I0320 08:44:01.027843 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" event={"ID":"372179a0-537a-4126-97c1-2d6a045e8798","Type":"ContainerStarted","Data":"9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd"} Mar 20 08:44:01 crc kubenswrapper[5136]: I0320 08:44:01.042226 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-copy-data" podStartSLOduration=3.858317195 podStartE2EDuration="4.04220654s" podCreationTimestamp="2026-03-20 08:43:57 +0000 UTC" firstStartedPulling="2026-03-20 08:43:59.969002412 +0000 UTC m=+6872.228313563" lastFinishedPulling="2026-03-20 08:44:00.152891757 +0000 UTC m=+6872.412202908" observedRunningTime="2026-03-20 08:44:01.037095501 +0000 UTC m=+6873.296406672" watchObservedRunningTime="2026-03-20 08:44:01.04220654 +0000 UTC m=+6873.301517691" Mar 20 08:44:03 crc kubenswrapper[5136]: I0320 08:44:03.047234 5136 generic.go:334] "Generic (PLEG): container finished" podID="372179a0-537a-4126-97c1-2d6a045e8798" containerID="a7d9dee7dfd341c20d54bcc9a10648dd04c5eaeec50a978661f3c530263c499e" exitCode=0 Mar 20 08:44:03 crc kubenswrapper[5136]: I0320 08:44:03.047280 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" event={"ID":"372179a0-537a-4126-97c1-2d6a045e8798","Type":"ContainerDied","Data":"a7d9dee7dfd341c20d54bcc9a10648dd04c5eaeec50a978661f3c530263c499e"} Mar 20 08:44:04 crc kubenswrapper[5136]: I0320 08:44:04.420946 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:04 crc kubenswrapper[5136]: I0320 08:44:04.590641 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqlbm\" (UniqueName: \"kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm\") pod \"372179a0-537a-4126-97c1-2d6a045e8798\" (UID: \"372179a0-537a-4126-97c1-2d6a045e8798\") " Mar 20 08:44:04 crc kubenswrapper[5136]: I0320 08:44:04.599167 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm" (OuterVolumeSpecName: "kube-api-access-sqlbm") pod "372179a0-537a-4126-97c1-2d6a045e8798" (UID: "372179a0-537a-4126-97c1-2d6a045e8798"). InnerVolumeSpecName "kube-api-access-sqlbm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:04 crc kubenswrapper[5136]: I0320 08:44:04.692535 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqlbm\" (UniqueName: \"kubernetes.io/projected/372179a0-537a-4126-97c1-2d6a045e8798-kube-api-access-sqlbm\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:05 crc kubenswrapper[5136]: I0320 08:44:05.067251 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" event={"ID":"372179a0-537a-4126-97c1-2d6a045e8798","Type":"ContainerDied","Data":"9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd"} Mar 20 08:44:05 crc kubenswrapper[5136]: I0320 08:44:05.067289 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d1640700b6eb174982a1eb1fab80d8b5460dd14040a59516a0348ac05eb2acd" Mar 20 08:44:05 crc kubenswrapper[5136]: I0320 08:44:05.067304 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566604-xrrdd" Mar 20 08:44:05 crc kubenswrapper[5136]: I0320 08:44:05.494905 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566598-9zdgt"] Mar 20 08:44:05 crc kubenswrapper[5136]: I0320 08:44:05.501414 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566598-9zdgt"] Mar 20 08:44:06 crc kubenswrapper[5136]: I0320 08:44:06.404434 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9379207f-99bf-4561-8979-f27be8f510ac" path="/var/lib/kubelet/pods/9379207f-99bf-4561-8979-f27be8f510ac/volumes" Mar 20 08:44:06 crc kubenswrapper[5136]: I0320 08:44:06.561550 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:44:06 crc kubenswrapper[5136]: I0320 08:44:06.631134 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:44:06 crc kubenswrapper[5136]: I0320 08:44:06.631392 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="dnsmasq-dns" containerID="cri-o://8c3b72a05088d18b83e8fd4c523a6250996da641765be00333d607a1dfe71673" gracePeriod=10 Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.085711 5136 generic.go:334] "Generic (PLEG): container finished" podID="e61df6ca-2419-400a-8790-9695f75c6d92" containerID="8c3b72a05088d18b83e8fd4c523a6250996da641765be00333d607a1dfe71673" exitCode=0 Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.085760 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" event={"ID":"e61df6ca-2419-400a-8790-9695f75c6d92","Type":"ContainerDied","Data":"8c3b72a05088d18b83e8fd4c523a6250996da641765be00333d607a1dfe71673"} Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 
08:44:07.085787 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" event={"ID":"e61df6ca-2419-400a-8790-9695f75c6d92","Type":"ContainerDied","Data":"5f995642f784fe24cc982d1a64669bee0b35f9981d3b63db2aa6b8236cd2ea18"} Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.085798 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f995642f784fe24cc982d1a64669bee0b35f9981d3b63db2aa6b8236cd2ea18" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.094568 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.246141 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc\") pod \"e61df6ca-2419-400a-8790-9695f75c6d92\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.246617 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxv65\" (UniqueName: \"kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65\") pod \"e61df6ca-2419-400a-8790-9695f75c6d92\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.247283 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config\") pod \"e61df6ca-2419-400a-8790-9695f75c6d92\" (UID: \"e61df6ca-2419-400a-8790-9695f75c6d92\") " Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.255619 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65" (OuterVolumeSpecName: "kube-api-access-kxv65") pod 
"e61df6ca-2419-400a-8790-9695f75c6d92" (UID: "e61df6ca-2419-400a-8790-9695f75c6d92"). InnerVolumeSpecName "kube-api-access-kxv65". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.295140 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e61df6ca-2419-400a-8790-9695f75c6d92" (UID: "e61df6ca-2419-400a-8790-9695f75c6d92"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.296036 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config" (OuterVolumeSpecName: "config") pod "e61df6ca-2419-400a-8790-9695f75c6d92" (UID: "e61df6ca-2419-400a-8790-9695f75c6d92"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.348973 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.349015 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxv65\" (UniqueName: \"kubernetes.io/projected/e61df6ca-2419-400a-8790-9695f75c6d92-kube-api-access-kxv65\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:07 crc kubenswrapper[5136]: I0320 08:44:07.349028 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e61df6ca-2419-400a-8790-9695f75c6d92-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.093723 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b5c84b9cc-cxrps" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.144149 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.152759 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b5c84b9cc-cxrps"] Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.400694 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:44:08 crc kubenswrapper[5136]: E0320 08:44:08.401277 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.405801 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" path="/var/lib/kubelet/pods/e61df6ca-2419-400a-8790-9695f75c6d92/volumes" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.855728 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Mar 20 08:44:08 crc kubenswrapper[5136]: E0320 08:44:08.856556 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="init" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.856577 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="init" Mar 20 08:44:08 crc kubenswrapper[5136]: E0320 08:44:08.856594 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="dnsmasq-dns" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.856600 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="dnsmasq-dns" Mar 20 08:44:08 crc kubenswrapper[5136]: E0320 08:44:08.856616 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="372179a0-537a-4126-97c1-2d6a045e8798" containerName="oc" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.856624 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="372179a0-537a-4126-97c1-2d6a045e8798" containerName="oc" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.857009 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61df6ca-2419-400a-8790-9695f75c6d92" containerName="dnsmasq-dns" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.857033 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="372179a0-537a-4126-97c1-2d6a045e8798" containerName="oc" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.858872 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.871592 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.871901 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.871897 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-tk76c" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.872669 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.903987 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.975984 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976080 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976106 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 
20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976131 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976183 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976204 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2w7g9\" (UniqueName: \"kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:08 crc kubenswrapper[5136]: I0320 08:44:08.976238 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078116 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078384 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078488 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078615 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078714 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w7g9\" (UniqueName: \"kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078797 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.078939 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " 
pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.080106 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.080366 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.081139 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.085697 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.085710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.091589 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.096627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w7g9\" (UniqueName: \"kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9\") pod \"ovn-northd-0\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.202136 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 08:44:09 crc kubenswrapper[5136]: I0320 08:44:09.707338 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 08:44:09 crc kubenswrapper[5136]: W0320 08:44:09.718724 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22659681_bc2b_4056_81d6_96b046e45712.slice/crio-967dc1b56184016d81cec7d8f7dbb1450bc76817bce0736f60753fc52d0a9ea2 WatchSource:0}: Error finding container 967dc1b56184016d81cec7d8f7dbb1450bc76817bce0736f60753fc52d0a9ea2: Status 404 returned error can't find the container with id 967dc1b56184016d81cec7d8f7dbb1450bc76817bce0736f60753fc52d0a9ea2 Mar 20 08:44:10 crc kubenswrapper[5136]: I0320 08:44:10.109958 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerStarted","Data":"967dc1b56184016d81cec7d8f7dbb1450bc76817bce0736f60753fc52d0a9ea2"} Mar 20 08:44:11 crc kubenswrapper[5136]: I0320 08:44:11.119546 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerStarted","Data":"43d8180654ac711b0a6c655f92be552ad6bb0d4e4426596385b695958afa2b74"} Mar 20 08:44:11 crc kubenswrapper[5136]: I0320 08:44:11.119905 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerStarted","Data":"491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d"} Mar 20 08:44:11 crc kubenswrapper[5136]: I0320 08:44:11.119930 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Mar 20 08:44:11 crc kubenswrapper[5136]: I0320 08:44:11.144665 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.436347217 podStartE2EDuration="3.144616713s" podCreationTimestamp="2026-03-20 08:44:08 +0000 UTC" firstStartedPulling="2026-03-20 08:44:09.722577911 +0000 UTC m=+6881.981889062" lastFinishedPulling="2026-03-20 08:44:10.430847407 +0000 UTC m=+6882.690158558" observedRunningTime="2026-03-20 08:44:11.139793064 +0000 UTC m=+6883.399104215" watchObservedRunningTime="2026-03-20 08:44:11.144616713 +0000 UTC m=+6883.403927874" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.515309 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-blnd4"] Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.517233 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.525074 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3614-account-create-update-dp5t6"] Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.526236 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.527603 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.533268 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-blnd4"] Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.540665 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3614-account-create-update-dp5t6"] Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.619171 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pwxt\" (UniqueName: \"kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.619224 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-456dr\" (UniqueName: \"kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.619278 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.619330 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.738116 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.738755 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pwxt\" (UniqueName: \"kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.738870 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-456dr\" (UniqueName: \"kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.739041 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.740222 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.740311 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.758276 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-456dr\" (UniqueName: \"kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr\") pod \"keystone-3614-account-create-update-dp5t6\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.758366 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pwxt\" (UniqueName: \"kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt\") pod \"keystone-db-create-blnd4\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.837936 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:16 crc kubenswrapper[5136]: I0320 08:44:16.863247 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:17 crc kubenswrapper[5136]: I0320 08:44:17.300168 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-blnd4"] Mar 20 08:44:17 crc kubenswrapper[5136]: I0320 08:44:17.332965 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3614-account-create-update-dp5t6"] Mar 20 08:44:17 crc kubenswrapper[5136]: W0320 08:44:17.338651 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fb13f3a_3785_4650_8381_e4d5e6fa7f73.slice/crio-7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068 WatchSource:0}: Error finding container 7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068: Status 404 returned error can't find the container with id 7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068 Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.195055 5136 generic.go:334] "Generic (PLEG): container finished" podID="0fb13f3a-3785-4650-8381-e4d5e6fa7f73" containerID="ae45294b801e93d47563db9ba4054a170a4f53699928ebbc069e3e19b4610e4f" exitCode=0 Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.195146 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3614-account-create-update-dp5t6" event={"ID":"0fb13f3a-3785-4650-8381-e4d5e6fa7f73","Type":"ContainerDied","Data":"ae45294b801e93d47563db9ba4054a170a4f53699928ebbc069e3e19b4610e4f"} Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.195309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3614-account-create-update-dp5t6" event={"ID":"0fb13f3a-3785-4650-8381-e4d5e6fa7f73","Type":"ContainerStarted","Data":"7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068"} Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.196771 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="0749652f-3995-4e34-ba17-55eac4c3530c" containerID="943e6011fb2bb8f85aa7e1232523d7da6d707090421691ca85ab0e7998c29b98" exitCode=0 Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.196803 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-blnd4" event={"ID":"0749652f-3995-4e34-ba17-55eac4c3530c","Type":"ContainerDied","Data":"943e6011fb2bb8f85aa7e1232523d7da6d707090421691ca85ab0e7998c29b98"} Mar 20 08:44:18 crc kubenswrapper[5136]: I0320 08:44:18.196841 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-blnd4" event={"ID":"0749652f-3995-4e34-ba17-55eac4c3530c","Type":"ContainerStarted","Data":"2ed2716e588c929d3d5d3b916211215f75c49b755000a58dfb712d9c8fa0d264"} Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.555800 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.566158 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.606607 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts\") pod \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.606727 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-456dr\" (UniqueName: \"kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr\") pod \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\" (UID: \"0fb13f3a-3785-4650-8381-e4d5e6fa7f73\") " Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.606758 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pwxt\" (UniqueName: \"kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt\") pod \"0749652f-3995-4e34-ba17-55eac4c3530c\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.607005 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts\") pod \"0749652f-3995-4e34-ba17-55eac4c3530c\" (UID: \"0749652f-3995-4e34-ba17-55eac4c3530c\") " Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.607587 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fb13f3a-3785-4650-8381-e4d5e6fa7f73" (UID: "0fb13f3a-3785-4650-8381-e4d5e6fa7f73"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.607786 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0749652f-3995-4e34-ba17-55eac4c3530c" (UID: "0749652f-3995-4e34-ba17-55eac4c3530c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.613302 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt" (OuterVolumeSpecName: "kube-api-access-4pwxt") pod "0749652f-3995-4e34-ba17-55eac4c3530c" (UID: "0749652f-3995-4e34-ba17-55eac4c3530c"). InnerVolumeSpecName "kube-api-access-4pwxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.613369 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr" (OuterVolumeSpecName: "kube-api-access-456dr") pod "0fb13f3a-3785-4650-8381-e4d5e6fa7f73" (UID: "0fb13f3a-3785-4650-8381-e4d5e6fa7f73"). InnerVolumeSpecName "kube-api-access-456dr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.709413 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-456dr\" (UniqueName: \"kubernetes.io/projected/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-kube-api-access-456dr\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.709452 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pwxt\" (UniqueName: \"kubernetes.io/projected/0749652f-3995-4e34-ba17-55eac4c3530c-kube-api-access-4pwxt\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.709466 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0749652f-3995-4e34-ba17-55eac4c3530c-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:19 crc kubenswrapper[5136]: I0320 08:44:19.709493 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fb13f3a-3785-4650-8381-e4d5e6fa7f73-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.228199 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-blnd4" Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.228200 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-blnd4" event={"ID":"0749652f-3995-4e34-ba17-55eac4c3530c","Type":"ContainerDied","Data":"2ed2716e588c929d3d5d3b916211215f75c49b755000a58dfb712d9c8fa0d264"} Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.228329 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ed2716e588c929d3d5d3b916211215f75c49b755000a58dfb712d9c8fa0d264" Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.230410 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-3614-account-create-update-dp5t6" event={"ID":"0fb13f3a-3785-4650-8381-e4d5e6fa7f73","Type":"ContainerDied","Data":"7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068"} Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.230448 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3614-account-create-update-dp5t6" Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.230449 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e9bc12f90cc6a37b11e496a52c7be74ae88cbc41b583cfc8564a8d84beb7068" Mar 20 08:44:20 crc kubenswrapper[5136]: I0320 08:44:20.397582 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:44:20 crc kubenswrapper[5136]: E0320 08:44:20.398324 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.983572 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-62shw"] Mar 20 08:44:21 crc kubenswrapper[5136]: E0320 08:44:21.983975 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fb13f3a-3785-4650-8381-e4d5e6fa7f73" containerName="mariadb-account-create-update" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.983994 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fb13f3a-3785-4650-8381-e4d5e6fa7f73" containerName="mariadb-account-create-update" Mar 20 08:44:21 crc kubenswrapper[5136]: E0320 08:44:21.984014 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0749652f-3995-4e34-ba17-55eac4c3530c" containerName="mariadb-database-create" Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.984021 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0749652f-3995-4e34-ba17-55eac4c3530c" containerName="mariadb-database-create" Mar 20 08:44:21 crc 
kubenswrapper[5136]: I0320 08:44:21.984175 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0749652f-3995-4e34-ba17-55eac4c3530c" containerName="mariadb-database-create"
Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.984188 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fb13f3a-3785-4650-8381-e4d5e6fa7f73" containerName="mariadb-account-create-update"
Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.984691 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.986505 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bcnhn"
Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.988467 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.989893 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.990202 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 20 08:44:21 crc kubenswrapper[5136]: I0320 08:44:21.999433 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-62shw"]
Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.062748 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.062913 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.062955 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwsfv\" (UniqueName: \"kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.165670 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwsfv\" (UniqueName: \"kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.165781 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.165889 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.176730 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.181421 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.208457 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwsfv\" (UniqueName: \"kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv\") pod \"keystone-db-sync-62shw\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") " pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.368797 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:22 crc kubenswrapper[5136]: I0320 08:44:22.792384 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-62shw"]
Mar 20 08:44:23 crc kubenswrapper[5136]: I0320 08:44:23.253363 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-62shw" event={"ID":"21e9b60d-f307-406d-9085-fbd9d8b67cf5","Type":"ContainerStarted","Data":"549f9ebb1b138869c8af30c58ac84b76e50c7d4cdb473ff81b9c92aa5b441e01"}
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.289257 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-62shw" event={"ID":"21e9b60d-f307-406d-9085-fbd9d8b67cf5","Type":"ContainerStarted","Data":"ea20f727b60adf5691fc1981831b3690eb78cdb64d09999efea953786d4a4eb5"}
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.314334 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-62shw" podStartSLOduration=2.467345602 podStartE2EDuration="7.31430709s" podCreationTimestamp="2026-03-20 08:44:21 +0000 UTC" firstStartedPulling="2026-03-20 08:44:22.7992089 +0000 UTC m=+6895.058520081" lastFinishedPulling="2026-03-20 08:44:27.646170408 +0000 UTC m=+6899.905481569" observedRunningTime="2026-03-20 08:44:28.312242496 +0000 UTC m=+6900.571553657" watchObservedRunningTime="2026-03-20 08:44:28.31430709 +0000 UTC m=+6900.573618271"
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.677802 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"]
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.681454 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.691416 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"]
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.777482 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.777550 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps95x\" (UniqueName: \"kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.777594 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.878584 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.878670 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps95x\" (UniqueName: \"kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.878731 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.879402 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.880075 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:28 crc kubenswrapper[5136]: I0320 08:44:28.912108 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps95x\" (UniqueName: \"kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x\") pod \"redhat-operators-2kvt9\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:29 crc kubenswrapper[5136]: I0320 08:44:29.005305 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:29 crc kubenswrapper[5136]: I0320 08:44:29.278292 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Mar 20 08:44:29 crc kubenswrapper[5136]: W0320 08:44:29.451379 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37fd264e_9020_4030_9f75_946d4f31cab0.slice/crio-3fe0fe28b244286c6b840a87150082973c5021674f51e6c65a29f0d0dfa6ccf7 WatchSource:0}: Error finding container 3fe0fe28b244286c6b840a87150082973c5021674f51e6c65a29f0d0dfa6ccf7: Status 404 returned error can't find the container with id 3fe0fe28b244286c6b840a87150082973c5021674f51e6c65a29f0d0dfa6ccf7
Mar 20 08:44:29 crc kubenswrapper[5136]: I0320 08:44:29.463674 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"]
Mar 20 08:44:30 crc kubenswrapper[5136]: I0320 08:44:30.305462 5136 generic.go:334] "Generic (PLEG): container finished" podID="37fd264e-9020-4030-9f75-946d4f31cab0" containerID="d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d" exitCode=0
Mar 20 08:44:30 crc kubenswrapper[5136]: I0320 08:44:30.305550 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerDied","Data":"d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d"}
Mar 20 08:44:30 crc kubenswrapper[5136]: I0320 08:44:30.305598 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerStarted","Data":"3fe0fe28b244286c6b840a87150082973c5021674f51e6c65a29f0d0dfa6ccf7"}
Mar 20 08:44:30 crc kubenswrapper[5136]: I0320 08:44:30.306910 5136 generic.go:334] "Generic (PLEG): container finished" podID="21e9b60d-f307-406d-9085-fbd9d8b67cf5" containerID="ea20f727b60adf5691fc1981831b3690eb78cdb64d09999efea953786d4a4eb5" exitCode=0
Mar 20 08:44:30 crc kubenswrapper[5136]: I0320 08:44:30.306946 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-62shw" event={"ID":"21e9b60d-f307-406d-9085-fbd9d8b67cf5","Type":"ContainerDied","Data":"ea20f727b60adf5691fc1981831b3690eb78cdb64d09999efea953786d4a4eb5"}
Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.316294 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerStarted","Data":"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4"}
Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.645332 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.721076 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwsfv\" (UniqueName: \"kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv\") pod \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") "
Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.721168 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle\") pod \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") "
Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.721317 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data\") pod \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\" (UID: \"21e9b60d-f307-406d-9085-fbd9d8b67cf5\") "
Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.725921 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv" (OuterVolumeSpecName: "kube-api-access-fwsfv") pod "21e9b60d-f307-406d-9085-fbd9d8b67cf5" (UID: "21e9b60d-f307-406d-9085-fbd9d8b67cf5"). InnerVolumeSpecName "kube-api-access-fwsfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.746891 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21e9b60d-f307-406d-9085-fbd9d8b67cf5" (UID: "21e9b60d-f307-406d-9085-fbd9d8b67cf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.777279 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data" (OuterVolumeSpecName: "config-data") pod "21e9b60d-f307-406d-9085-fbd9d8b67cf5" (UID: "21e9b60d-f307-406d-9085-fbd9d8b67cf5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.823352 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwsfv\" (UniqueName: \"kubernetes.io/projected/21e9b60d-f307-406d-9085-fbd9d8b67cf5-kube-api-access-fwsfv\") on node \"crc\" DevicePath \"\""
Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.823390 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:44:31 crc kubenswrapper[5136]: I0320 08:44:31.823399 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21e9b60d-f307-406d-9085-fbd9d8b67cf5-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.331654 5136 generic.go:334] "Generic (PLEG): container finished" podID="37fd264e-9020-4030-9f75-946d4f31cab0" containerID="02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4" exitCode=0
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.331760 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerDied","Data":"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4"}
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.339271 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-62shw" event={"ID":"21e9b60d-f307-406d-9085-fbd9d8b67cf5","Type":"ContainerDied","Data":"549f9ebb1b138869c8af30c58ac84b76e50c7d4cdb473ff81b9c92aa5b441e01"}
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.339313 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="549f9ebb1b138869c8af30c58ac84b76e50c7d4cdb473ff81b9c92aa5b441e01"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.339356 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-62shw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.579318 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"]
Mar 20 08:44:32 crc kubenswrapper[5136]: E0320 08:44:32.579756 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e9b60d-f307-406d-9085-fbd9d8b67cf5" containerName="keystone-db-sync"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.579781 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e9b60d-f307-406d-9085-fbd9d8b67cf5" containerName="keystone-db-sync"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.579976 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e9b60d-f307-406d-9085-fbd9d8b67cf5" containerName="keystone-db-sync"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.580997 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.600594 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"]
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.636255 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.636327 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.636359 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzdjb\" (UniqueName: \"kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.636457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.636493 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.642299 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-l4bpw"]
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.643346 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.647474 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.647529 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.647711 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.647804 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.652291 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bcnhn"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.662191 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l4bpw"]
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738098 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738158 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738181 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738217 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738243 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzdjb\" (UniqueName: \"kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738274 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738321 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738356 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738378 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zx5jm\" (UniqueName: \"kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738416 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.738448 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.739459 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.740191 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.740792 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.741565 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.775988 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzdjb\" (UniqueName: \"kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb\") pod \"dnsmasq-dns-777959d579-j5npb\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") " pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.839903 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.839941 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.839983 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.840022 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.840049 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.840064 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zx5jm\" (UniqueName: \"kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.844018 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.844182 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.844925 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.845370 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.854356 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.856050 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zx5jm\" (UniqueName: \"kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm\") pod \"keystone-bootstrap-l4bpw\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.899278 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:32 crc kubenswrapper[5136]: I0320 08:44:32.969055 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l4bpw"
Mar 20 08:44:33 crc kubenswrapper[5136]: I0320 08:44:33.346698 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"]
Mar 20 08:44:33 crc kubenswrapper[5136]: I0320 08:44:33.348572 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerStarted","Data":"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7"}
Mar 20 08:44:33 crc kubenswrapper[5136]: W0320 08:44:33.354131 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd5e6126_8bb0_497c_9a3a_856e96128e83.slice/crio-148add14ed98710953a69caa68c00199c9d76d0d24f031860e3e3fd6ef37946b WatchSource:0}: Error finding container 148add14ed98710953a69caa68c00199c9d76d0d24f031860e3e3fd6ef37946b: Status 404 returned error can't find the container with id 148add14ed98710953a69caa68c00199c9d76d0d24f031860e3e3fd6ef37946b
Mar 20 08:44:33 crc kubenswrapper[5136]: I0320 08:44:33.369469 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2kvt9" podStartSLOduration=2.884124474 podStartE2EDuration="5.369445717s" podCreationTimestamp="2026-03-20 08:44:28 +0000 UTC" firstStartedPulling="2026-03-20 08:44:30.307831154 +0000 UTC m=+6902.567142305" lastFinishedPulling="2026-03-20 08:44:32.793152397 +0000 UTC m=+6905.052463548" observedRunningTime="2026-03-20 08:44:33.365102683 +0000 UTC m=+6905.624413834" watchObservedRunningTime="2026-03-20 08:44:33.369445717 +0000 UTC m=+6905.628756868"
Mar 20 08:44:33 crc kubenswrapper[5136]: I0320 08:44:33.485107 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-l4bpw"]
Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.366264 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4bpw" event={"ID":"bf9fd65f-edc7-45c1-9503-1eb4386d5f38","Type":"ContainerStarted","Data":"c0135a379aa43c0ef1ad29a602be6db0384857bc32c56cdc2b2d0040cfa4649a"}
Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.366588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4bpw" event={"ID":"bf9fd65f-edc7-45c1-9503-1eb4386d5f38","Type":"ContainerStarted","Data":"3239bd249ccd137b449e7ffafd6142d9ad57319034a579ce14aaaed982e8bcdb"}
Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.373014 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerID="721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217" exitCode=0
Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.373071 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777959d579-j5npb" event={"ID":"bd5e6126-8bb0-497c-9a3a-856e96128e83","Type":"ContainerDied","Data":"721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217"}
Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.373147 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777959d579-j5npb" event={"ID":"bd5e6126-8bb0-497c-9a3a-856e96128e83","Type":"ContainerStarted","Data":"148add14ed98710953a69caa68c00199c9d76d0d24f031860e3e3fd6ef37946b"}
Mar 20 08:44:34 crc kubenswrapper[5136]: I0320 08:44:34.385096 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-l4bpw" podStartSLOduration=2.385052279 podStartE2EDuration="2.385052279s" podCreationTimestamp="2026-03-20 08:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:44:34.378984532 +0000 UTC m=+6906.638295683" watchObservedRunningTime="2026-03-20 08:44:34.385052279 +0000 UTC m=+6906.644363440"
Mar 20 08:44:35 crc kubenswrapper[5136]: I0320 08:44:35.385899 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777959d579-j5npb" event={"ID":"bd5e6126-8bb0-497c-9a3a-856e96128e83","Type":"ContainerStarted","Data":"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"}
Mar 20 08:44:35 crc kubenswrapper[5136]: I0320 08:44:35.396564 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:44:35 crc kubenswrapper[5136]: E0320 08:44:35.396776 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:44:35 crc kubenswrapper[5136]: I0320 08:44:35.423643 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-777959d579-j5npb" podStartSLOduration=3.423625973 podStartE2EDuration="3.423625973s" podCreationTimestamp="2026-03-20 08:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:44:35.420344682 +0000 UTC m=+6907.679655833" watchObservedRunningTime="2026-03-20 08:44:35.423625973 +0000 UTC m=+6907.682937124"
Mar 20 08:44:36 crc kubenswrapper[5136]: I0320 08:44:36.391642 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:44:38 crc kubenswrapper[5136]: I0320 08:44:38.411623 5136 generic.go:334] "Generic (PLEG): container finished" podID="bf9fd65f-edc7-45c1-9503-1eb4386d5f38" containerID="c0135a379aa43c0ef1ad29a602be6db0384857bc32c56cdc2b2d0040cfa4649a" exitCode=0
Mar 20 08:44:38 crc kubenswrapper[5136]: I0320 08:44:38.418631 5136 scope.go:117] "RemoveContainer" containerID="e29edc0f4375ac391060cb753f50bdb9915298f531076d0c17e85a24815a777f"
Mar 20 08:44:38 crc kubenswrapper[5136]: I0320 08:44:38.419573 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4bpw" event={"ID":"bf9fd65f-edc7-45c1-9503-1eb4386d5f38","Type":"ContainerDied","Data":"c0135a379aa43c0ef1ad29a602be6db0384857bc32c56cdc2b2d0040cfa4649a"}
Mar 20 08:44:38 crc kubenswrapper[5136]: I0320 08:44:38.478364 5136 scope.go:117] "RemoveContainer" containerID="8c3b72a05088d18b83e8fd4c523a6250996da641765be00333d607a1dfe71673"
Mar 20 08:44:38 crc kubenswrapper[5136]: I0320 08:44:38.493405 5136 scope.go:117] "RemoveContainer" containerID="02dd795cb150362efe906bc099f470a71a335d9458efc922a23eb6c04569901e"
Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.006361 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.006649 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2kvt9"
Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.753599 5136
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.783604 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zx5jm\" (UniqueName: \"kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.783713 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.783764 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.783846 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.783887 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.784035 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys\") pod \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\" (UID: \"bf9fd65f-edc7-45c1-9503-1eb4386d5f38\") " Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.789576 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts" (OuterVolumeSpecName: "scripts") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.789946 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.790621 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm" (OuterVolumeSpecName: "kube-api-access-zx5jm") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "kube-api-access-zx5jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.791297 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.807032 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.810734 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data" (OuterVolumeSpecName: "config-data") pod "bf9fd65f-edc7-45c1-9503-1eb4386d5f38" (UID: "bf9fd65f-edc7-45c1-9503-1eb4386d5f38"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.886287 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zx5jm\" (UniqueName: \"kubernetes.io/projected/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-kube-api-access-zx5jm\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.886323 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.886333 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.886343 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:39 crc 
kubenswrapper[5136]: I0320 08:44:39.886352 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:39 crc kubenswrapper[5136]: I0320 08:44:39.886359 5136 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bf9fd65f-edc7-45c1-9503-1eb4386d5f38-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.052385 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2kvt9" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="registry-server" probeResult="failure" output=< Mar 20 08:44:40 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 08:44:40 crc kubenswrapper[5136]: > Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.450024 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-l4bpw" event={"ID":"bf9fd65f-edc7-45c1-9503-1eb4386d5f38","Type":"ContainerDied","Data":"3239bd249ccd137b449e7ffafd6142d9ad57319034a579ce14aaaed982e8bcdb"} Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.450067 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3239bd249ccd137b449e7ffafd6142d9ad57319034a579ce14aaaed982e8bcdb" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.450252 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-l4bpw" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.506170 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-l4bpw"] Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.513280 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-l4bpw"] Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.603978 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-645md"] Mar 20 08:44:40 crc kubenswrapper[5136]: E0320 08:44:40.604425 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf9fd65f-edc7-45c1-9503-1eb4386d5f38" containerName="keystone-bootstrap" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.604452 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf9fd65f-edc7-45c1-9503-1eb4386d5f38" containerName="keystone-bootstrap" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.604686 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf9fd65f-edc7-45c1-9503-1eb4386d5f38" containerName="keystone-bootstrap" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.605349 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.610370 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.610599 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bcnhn" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.610667 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.610697 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.610914 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.630853 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-645md"] Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.698700 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.699062 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.699131 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-9xh85\" (UniqueName: \"kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.699196 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.699287 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.699306 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800628 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800657 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xh85\" (UniqueName: \"kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800702 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800742 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.800761 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.807480 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys\") pod \"keystone-bootstrap-645md\" (UID: 
\"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.808088 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.809139 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.809222 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.812312 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 08:44:40.830641 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xh85\" (UniqueName: \"kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85\") pod \"keystone-bootstrap-645md\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:40 crc kubenswrapper[5136]: I0320 
08:44:40.938282 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:41 crc kubenswrapper[5136]: I0320 08:44:41.406477 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-645md"] Mar 20 08:44:41 crc kubenswrapper[5136]: W0320 08:44:41.410876 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode023c878_7ddf_478a_9069_85d32b1d5bf9.slice/crio-cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87 WatchSource:0}: Error finding container cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87: Status 404 returned error can't find the container with id cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87 Mar 20 08:44:41 crc kubenswrapper[5136]: I0320 08:44:41.464258 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-645md" event={"ID":"e023c878-7ddf-478a-9069-85d32b1d5bf9","Type":"ContainerStarted","Data":"cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87"} Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.411027 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf9fd65f-edc7-45c1-9503-1eb4386d5f38" path="/var/lib/kubelet/pods/bf9fd65f-edc7-45c1-9503-1eb4386d5f38/volumes" Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.472326 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-645md" event={"ID":"e023c878-7ddf-478a-9069-85d32b1d5bf9","Type":"ContainerStarted","Data":"7a4fb348e084d0c108a5953823b245c56381a86254bd1ec8ccef0cb8f458e61f"} Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.493259 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-645md" podStartSLOduration=2.493238616 podStartE2EDuration="2.493238616s" podCreationTimestamp="2026-03-20 08:44:40 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:44:42.486210268 +0000 UTC m=+6914.745521429" watchObservedRunningTime="2026-03-20 08:44:42.493238616 +0000 UTC m=+6914.752549767" Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.901139 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-777959d579-j5npb" Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.962111 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:44:42 crc kubenswrapper[5136]: I0320 08:44:42.962345 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="dnsmasq-dns" containerID="cri-o://fa96e658106264d429d004b45b4bacbef4dfbc220f9941975579c5341fa77132" gracePeriod=10 Mar 20 08:44:43 crc kubenswrapper[5136]: I0320 08:44:43.482596 5136 generic.go:334] "Generic (PLEG): container finished" podID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerID="fa96e658106264d429d004b45b4bacbef4dfbc220f9941975579c5341fa77132" exitCode=0 Mar 20 08:44:43 crc kubenswrapper[5136]: I0320 08:44:43.482746 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" event={"ID":"80ba4d56-2bee-4ab9-9acd-c7588d675a4b","Type":"ContainerDied","Data":"fa96e658106264d429d004b45b4bacbef4dfbc220f9941975579c5341fa77132"} Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.176283 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.370106 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config\") pod \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.370168 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc\") pod \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.370227 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb\") pod \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.370871 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb\") pod \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.370920 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snk2j\" (UniqueName: \"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j\") pod \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\" (UID: \"80ba4d56-2bee-4ab9-9acd-c7588d675a4b\") " Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.385064 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j" (OuterVolumeSpecName: "kube-api-access-snk2j") pod "80ba4d56-2bee-4ab9-9acd-c7588d675a4b" (UID: "80ba4d56-2bee-4ab9-9acd-c7588d675a4b"). InnerVolumeSpecName "kube-api-access-snk2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.473580 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "80ba4d56-2bee-4ab9-9acd-c7588d675a4b" (UID: "80ba4d56-2bee-4ab9-9acd-c7588d675a4b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.473663 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snk2j\" (UniqueName: \"kubernetes.io/projected/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-kube-api-access-snk2j\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.477186 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "80ba4d56-2bee-4ab9-9acd-c7588d675a4b" (UID: "80ba4d56-2bee-4ab9-9acd-c7588d675a4b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.486314 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80ba4d56-2bee-4ab9-9acd-c7588d675a4b" (UID: "80ba4d56-2bee-4ab9-9acd-c7588d675a4b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.497437 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.502220 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config" (OuterVolumeSpecName: "config") pod "80ba4d56-2bee-4ab9-9acd-c7588d675a4b" (UID: "80ba4d56-2bee-4ab9-9acd-c7588d675a4b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.566966 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d68955ff-c2m5x" event={"ID":"80ba4d56-2bee-4ab9-9acd-c7588d675a4b","Type":"ContainerDied","Data":"167230fdb5149a52edd1822e02c35390c6ca18f21c97fbb6e377a5b71350b091"} Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.567031 5136 scope.go:117] "RemoveContainer" containerID="fa96e658106264d429d004b45b4bacbef4dfbc220f9941975579c5341fa77132" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.574920 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.574961 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.574973 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.574983 5136 
reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80ba4d56-2bee-4ab9-9acd-c7588d675a4b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.638418 5136 scope.go:117] "RemoveContainer" containerID="e3d5db568a0a051b325af6f8c22b6c105820123adce2d9ee29ab549861506fd4" Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.830161 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:44:44 crc kubenswrapper[5136]: I0320 08:44:44.837316 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d68955ff-c2m5x"] Mar 20 08:44:45 crc kubenswrapper[5136]: I0320 08:44:45.508856 5136 generic.go:334] "Generic (PLEG): container finished" podID="e023c878-7ddf-478a-9069-85d32b1d5bf9" containerID="7a4fb348e084d0c108a5953823b245c56381a86254bd1ec8ccef0cb8f458e61f" exitCode=0 Mar 20 08:44:45 crc kubenswrapper[5136]: I0320 08:44:45.508923 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-645md" event={"ID":"e023c878-7ddf-478a-9069-85d32b1d5bf9","Type":"ContainerDied","Data":"7a4fb348e084d0c108a5953823b245c56381a86254bd1ec8ccef0cb8f458e61f"} Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.407379 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" path="/var/lib/kubelet/pods/80ba4d56-2bee-4ab9-9acd-c7588d675a4b/volumes" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.803190 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.809378 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.809485 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xh85\" (UniqueName: \"kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.809546 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.809624 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.815004 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.820341 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.825127 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85" (OuterVolumeSpecName: "kube-api-access-9xh85") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "kube-api-access-9xh85". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.837404 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data" (OuterVolumeSpecName: "config-data") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.911884 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.911965 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts\") pod \"e023c878-7ddf-478a-9069-85d32b1d5bf9\" (UID: \"e023c878-7ddf-478a-9069-85d32b1d5bf9\") " Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.912495 5136 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-credential-keys\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.912518 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xh85\" (UniqueName: \"kubernetes.io/projected/e023c878-7ddf-478a-9069-85d32b1d5bf9-kube-api-access-9xh85\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.912528 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.912537 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-fernet-keys\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.918987 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts" (OuterVolumeSpecName: "scripts") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:46 crc kubenswrapper[5136]: I0320 08:44:46.934468 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e023c878-7ddf-478a-9069-85d32b1d5bf9" (UID: "e023c878-7ddf-478a-9069-85d32b1d5bf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.014291 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.014335 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e023c878-7ddf-478a-9069-85d32b1d5bf9-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.396506 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:44:47 crc kubenswrapper[5136]: E0320 08:44:47.396747 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.527482 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-645md" event={"ID":"e023c878-7ddf-478a-9069-85d32b1d5bf9","Type":"ContainerDied","Data":"cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87"} Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.527528 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf4a6ec30ea0591f9919e3d8394d5bb2bf3df4413bb6658ef450b2853063fe87" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.527593 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-645md" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.633651 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"] Mar 20 08:44:47 crc kubenswrapper[5136]: E0320 08:44:47.634055 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e023c878-7ddf-478a-9069-85d32b1d5bf9" containerName="keystone-bootstrap" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.634077 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e023c878-7ddf-478a-9069-85d32b1d5bf9" containerName="keystone-bootstrap" Mar 20 08:44:47 crc kubenswrapper[5136]: E0320 08:44:47.634104 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="dnsmasq-dns" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.634112 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="dnsmasq-dns" Mar 20 08:44:47 crc kubenswrapper[5136]: E0320 08:44:47.634131 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="init" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.634140 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="init" Mar 20 08:44:47 crc 
kubenswrapper[5136]: I0320 08:44:47.634347 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e023c878-7ddf-478a-9069-85d32b1d5bf9" containerName="keystone-bootstrap" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.634377 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="80ba4d56-2bee-4ab9-9acd-c7588d675a4b" containerName="dnsmasq-dns" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.635086 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640490 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640514 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640527 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640656 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640674 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.640960 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-bcnhn" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.653403 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"] Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.723957 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys\") 
pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724022 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt2qd\" (UniqueName: \"kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724060 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724238 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724272 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724304 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724344 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.724377 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.825492 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.826551 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.827181 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt2qd\" (UniqueName: 
\"kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.827310 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.827582 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.827690 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.827939 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.828150 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys\") pod 
\"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.828516 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.838507 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.845538 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.848850 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.849341 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc 
kubenswrapper[5136]: I0320 08:44:47.853220 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.865444 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.872832 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt2qd\" (UniqueName: \"kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd\") pod \"keystone-69dd969bf5-bw8cr\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") " pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:47 crc kubenswrapper[5136]: I0320 08:44:47.992023 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:48 crc kubenswrapper[5136]: I0320 08:44:48.410002 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"] Mar 20 08:44:48 crc kubenswrapper[5136]: W0320 08:44:48.421577 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6492170d_c425_4bc1_8f26_b002ade2a30a.slice/crio-8f7c1605d17f9d449c5a1d9a15decff286543015c063129a09c0f97fced38720 WatchSource:0}: Error finding container 8f7c1605d17f9d449c5a1d9a15decff286543015c063129a09c0f97fced38720: Status 404 returned error can't find the container with id 8f7c1605d17f9d449c5a1d9a15decff286543015c063129a09c0f97fced38720 Mar 20 08:44:48 crc kubenswrapper[5136]: I0320 08:44:48.540682 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69dd969bf5-bw8cr" event={"ID":"6492170d-c425-4bc1-8f26-b002ade2a30a","Type":"ContainerStarted","Data":"8f7c1605d17f9d449c5a1d9a15decff286543015c063129a09c0f97fced38720"} Mar 20 08:44:49 crc kubenswrapper[5136]: I0320 08:44:49.054165 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:49 crc kubenswrapper[5136]: I0320 08:44:49.099439 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:49 crc kubenswrapper[5136]: I0320 08:44:49.297195 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"] Mar 20 08:44:49 crc kubenswrapper[5136]: I0320 08:44:49.548161 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69dd969bf5-bw8cr" event={"ID":"6492170d-c425-4bc1-8f26-b002ade2a30a","Type":"ContainerStarted","Data":"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b"} Mar 20 08:44:49 crc kubenswrapper[5136]: I0320 
08:44:49.566670 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-69dd969bf5-bw8cr" podStartSLOduration=2.5666499959999998 podStartE2EDuration="2.566649996s" podCreationTimestamp="2026-03-20 08:44:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:44:49.564027284 +0000 UTC m=+6921.823338445" watchObservedRunningTime="2026-03-20 08:44:49.566649996 +0000 UTC m=+6921.825961157" Mar 20 08:44:50 crc kubenswrapper[5136]: I0320 08:44:50.554148 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2kvt9" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="registry-server" containerID="cri-o://1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7" gracePeriod=2 Mar 20 08:44:50 crc kubenswrapper[5136]: I0320 08:44:50.557955 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.030525 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.180788 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content\") pod \"37fd264e-9020-4030-9f75-946d4f31cab0\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.181113 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities\") pod \"37fd264e-9020-4030-9f75-946d4f31cab0\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.181248 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps95x\" (UniqueName: \"kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x\") pod \"37fd264e-9020-4030-9f75-946d4f31cab0\" (UID: \"37fd264e-9020-4030-9f75-946d4f31cab0\") " Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.181953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities" (OuterVolumeSpecName: "utilities") pod "37fd264e-9020-4030-9f75-946d4f31cab0" (UID: "37fd264e-9020-4030-9f75-946d4f31cab0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.186216 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x" (OuterVolumeSpecName: "kube-api-access-ps95x") pod "37fd264e-9020-4030-9f75-946d4f31cab0" (UID: "37fd264e-9020-4030-9f75-946d4f31cab0"). InnerVolumeSpecName "kube-api-access-ps95x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.284007 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.284048 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps95x\" (UniqueName: \"kubernetes.io/projected/37fd264e-9020-4030-9f75-946d4f31cab0-kube-api-access-ps95x\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.309365 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37fd264e-9020-4030-9f75-946d4f31cab0" (UID: "37fd264e-9020-4030-9f75-946d4f31cab0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.385323 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37fd264e-9020-4030-9f75-946d4f31cab0-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.566068 5136 generic.go:334] "Generic (PLEG): container finished" podID="37fd264e-9020-4030-9f75-946d4f31cab0" containerID="1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7" exitCode=0 Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.566114 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerDied","Data":"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7"} Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.566156 5136 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2kvt9" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.566175 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2kvt9" event={"ID":"37fd264e-9020-4030-9f75-946d4f31cab0","Type":"ContainerDied","Data":"3fe0fe28b244286c6b840a87150082973c5021674f51e6c65a29f0d0dfa6ccf7"} Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.566199 5136 scope.go:117] "RemoveContainer" containerID="1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.584392 5136 scope.go:117] "RemoveContainer" containerID="02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.617978 5136 scope.go:117] "RemoveContainer" containerID="d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.631166 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"] Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.639265 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2kvt9"] Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.673001 5136 scope.go:117] "RemoveContainer" containerID="1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7" Mar 20 08:44:51 crc kubenswrapper[5136]: E0320 08:44:51.682971 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7\": container with ID starting with 1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7 not found: ID does not exist" containerID="1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.683019 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7"} err="failed to get container status \"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7\": rpc error: code = NotFound desc = could not find container \"1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7\": container with ID starting with 1bad74a1748dcb96163246cfacb414b90dca204d11fe29b6414f5067f21c54b7 not found: ID does not exist" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.683044 5136 scope.go:117] "RemoveContainer" containerID="02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4" Mar 20 08:44:51 crc kubenswrapper[5136]: E0320 08:44:51.686912 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4\": container with ID starting with 02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4 not found: ID does not exist" containerID="02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.686941 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4"} err="failed to get container status \"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4\": rpc error: code = NotFound desc = could not find container \"02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4\": container with ID starting with 02469795a64a79faedad55246771d8971c722d27d35c2a9e11e5225023dc40f4 not found: ID does not exist" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.686961 5136 scope.go:117] "RemoveContainer" containerID="d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d" Mar 20 08:44:51 crc kubenswrapper[5136]: E0320 
08:44:51.689014 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d\": container with ID starting with d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d not found: ID does not exist" containerID="d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d" Mar 20 08:44:51 crc kubenswrapper[5136]: I0320 08:44:51.689067 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d"} err="failed to get container status \"d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d\": rpc error: code = NotFound desc = could not find container \"d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d\": container with ID starting with d4b5aab2d242a757862e146aff41cc85a5d4d33a3dd085bf00cfa5846381a42d not found: ID does not exist" Mar 20 08:44:52 crc kubenswrapper[5136]: I0320 08:44:52.411542 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" path="/var/lib/kubelet/pods/37fd264e-9020-4030-9f75-946d4f31cab0/volumes" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.134023 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g"] Mar 20 08:45:00 crc kubenswrapper[5136]: E0320 08:45:00.134631 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="registry-server" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.134645 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="registry-server" Mar 20 08:45:00 crc kubenswrapper[5136]: E0320 08:45:00.134675 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="extract-utilities" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.134682 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="extract-utilities" Mar 20 08:45:00 crc kubenswrapper[5136]: E0320 08:45:00.134704 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="extract-content" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.134713 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="extract-content" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.134940 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="37fd264e-9020-4030-9f75-946d4f31cab0" containerName="registry-server" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.135603 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.137687 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.141839 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.145319 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g"] Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.280037 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume\") pod \"collect-profiles-29566605-wpt4g\" 
(UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.280253 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.280418 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9bpb\" (UniqueName: \"kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.381693 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.381776 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9bpb\" (UniqueName: \"kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.381911 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.383019 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.388619 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.397128 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:45:00 crc kubenswrapper[5136]: E0320 08:45:00.397670 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.402877 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9bpb\" (UniqueName: 
\"kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb\") pod \"collect-profiles-29566605-wpt4g\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.496343 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:00 crc kubenswrapper[5136]: I0320 08:45:00.998225 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g"] Mar 20 08:45:01 crc kubenswrapper[5136]: I0320 08:45:01.662370 5136 generic.go:334] "Generic (PLEG): container finished" podID="3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" containerID="d06383ead678a648b0b20646d0d0c9fe0235389efbb6ca7e052c4c75cf3a52ff" exitCode=0 Mar 20 08:45:01 crc kubenswrapper[5136]: I0320 08:45:01.662563 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" event={"ID":"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc","Type":"ContainerDied","Data":"d06383ead678a648b0b20646d0d0c9fe0235389efbb6ca7e052c4c75cf3a52ff"} Mar 20 08:45:01 crc kubenswrapper[5136]: I0320 08:45:01.662780 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" event={"ID":"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc","Type":"ContainerStarted","Data":"56ef9583bd6d5b02088561c903571ef36752c229c798b0b23fe1cf15c6181eb2"} Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.013642 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.125609 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9bpb\" (UniqueName: \"kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb\") pod \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.126155 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume\") pod \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.126352 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume\") pod \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\" (UID: \"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc\") " Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.126756 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume" (OuterVolumeSpecName: "config-volume") pod "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" (UID: "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.130971 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" (UID: "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.131347 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb" (OuterVolumeSpecName: "kube-api-access-t9bpb") pod "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" (UID: "3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc"). InnerVolumeSpecName "kube-api-access-t9bpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.227970 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-config-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.228008 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-secret-volume\") on node \"crc\" DevicePath \"\"" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.228020 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9bpb\" (UniqueName: \"kubernetes.io/projected/3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc-kube-api-access-t9bpb\") on node \"crc\" DevicePath \"\"" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.678558 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" event={"ID":"3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc","Type":"ContainerDied","Data":"56ef9583bd6d5b02088561c903571ef36752c229c798b0b23fe1cf15c6181eb2"} Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.678603 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56ef9583bd6d5b02088561c903571ef36752c229c798b0b23fe1cf15c6181eb2" Mar 20 08:45:03 crc kubenswrapper[5136]: I0320 08:45:03.678671 5136 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566605-wpt4g" Mar 20 08:45:04 crc kubenswrapper[5136]: I0320 08:45:04.083926 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"] Mar 20 08:45:04 crc kubenswrapper[5136]: I0320 08:45:04.097796 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566560-8gmtl"] Mar 20 08:45:04 crc kubenswrapper[5136]: I0320 08:45:04.416672 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ad33e9-cb6b-450c-9703-8d6e379f3075" path="/var/lib/kubelet/pods/90ad33e9-cb6b-450c-9703-8d6e379f3075/volumes" Mar 20 08:45:14 crc kubenswrapper[5136]: I0320 08:45:14.396750 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:45:14 crc kubenswrapper[5136]: E0320 08:45:14.397522 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:45:19 crc kubenswrapper[5136]: I0320 08:45:19.555747 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-69dd969bf5-bw8cr" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.435055 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 20 08:45:21 crc kubenswrapper[5136]: E0320 08:45:21.435485 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" containerName="collect-profiles" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 
08:45:21.435498 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" containerName="collect-profiles" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.435664 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b4cb3e5-78fa-4c21-ae4f-79b54fe610cc" containerName="collect-profiles" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.436250 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.438928 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-g94cv" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.439215 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.439729 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.461325 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.564480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.564522 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8ff7\" (UniqueName: \"kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc 
kubenswrapper[5136]: I0320 08:45:21.564542 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.564681 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.667766 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.667829 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8ff7\" (UniqueName: \"kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.667855 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.667893 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.669070 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.675249 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.675441 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.686693 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8ff7\" (UniqueName: \"kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7\") pod \"openstackclient\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " pod="openstack/openstackclient" Mar 20 08:45:21 crc kubenswrapper[5136]: I0320 08:45:21.761522 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 08:45:22 crc kubenswrapper[5136]: I0320 08:45:22.192982 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:45:22 crc kubenswrapper[5136]: I0320 08:45:22.838289 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f874f73-4453-44c8-b1d9-52559489bead","Type":"ContainerStarted","Data":"f5c09e60aafc3bfb4497c7d4524c1440cbf4ea1f7cf2061ba6a49655e1671665"} Mar 20 08:45:26 crc kubenswrapper[5136]: I0320 08:45:26.397021 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:45:26 crc kubenswrapper[5136]: E0320 08:45:26.397618 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:45:33 crc kubenswrapper[5136]: I0320 08:45:33.932987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"8f874f73-4453-44c8-b1d9-52559489bead","Type":"ContainerStarted","Data":"9bbc0f5018299d8801809b126e8536554b83592717e71efd04c53bc88080264e"} Mar 20 08:45:33 crc kubenswrapper[5136]: I0320 08:45:33.959204 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.943973584 podStartE2EDuration="12.959186258s" podCreationTimestamp="2026-03-20 08:45:21 +0000 UTC" firstStartedPulling="2026-03-20 08:45:22.202676542 +0000 UTC m=+6954.461987693" lastFinishedPulling="2026-03-20 08:45:33.217889216 +0000 UTC m=+6965.477200367" observedRunningTime="2026-03-20 08:45:33.95025351 +0000 UTC 
m=+6966.209564661" watchObservedRunningTime="2026-03-20 08:45:33.959186258 +0000 UTC m=+6966.218497409" Mar 20 08:45:38 crc kubenswrapper[5136]: I0320 08:45:38.402621 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:45:38 crc kubenswrapper[5136]: E0320 08:45:38.403641 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:45:38 crc kubenswrapper[5136]: I0320 08:45:38.558624 5136 scope.go:117] "RemoveContainer" containerID="a9399ede282cd1d4b161abddeaa1193070be8003a67d2c8907749c2c5dadab78" Mar 20 08:45:53 crc kubenswrapper[5136]: I0320 08:45:53.397442 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:45:53 crc kubenswrapper[5136]: E0320 08:45:53.398258 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.151637 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566606-rdf48"] Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.154369 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.157070 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.157147 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.157234 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.164355 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566606-rdf48"] Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.198043 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wgwk\" (UniqueName: \"kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk\") pod \"auto-csr-approver-29566606-rdf48\" (UID: \"895f2400-9932-4967-831f-f047de8c0f63\") " pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.300282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wgwk\" (UniqueName: \"kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk\") pod \"auto-csr-approver-29566606-rdf48\" (UID: \"895f2400-9932-4967-831f-f047de8c0f63\") " pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.318737 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wgwk\" (UniqueName: \"kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk\") pod \"auto-csr-approver-29566606-rdf48\" (UID: \"895f2400-9932-4967-831f-f047de8c0f63\") " 
pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.481807 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:00 crc kubenswrapper[5136]: I0320 08:46:00.938661 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566606-rdf48"] Mar 20 08:46:01 crc kubenswrapper[5136]: I0320 08:46:01.195840 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566606-rdf48" event={"ID":"895f2400-9932-4967-831f-f047de8c0f63","Type":"ContainerStarted","Data":"25ccbedf7f52d511dbfa63676fd266d9b896c3045f6514a33e16f8bec1197edd"} Mar 20 08:46:03 crc kubenswrapper[5136]: I0320 08:46:03.212923 5136 generic.go:334] "Generic (PLEG): container finished" podID="895f2400-9932-4967-831f-f047de8c0f63" containerID="ee7fc0aa7d70c450967fddf706c56fe4af54a2ede94af9ae1aa1f75f2c772efc" exitCode=0 Mar 20 08:46:03 crc kubenswrapper[5136]: I0320 08:46:03.213020 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566606-rdf48" event={"ID":"895f2400-9932-4967-831f-f047de8c0f63","Type":"ContainerDied","Data":"ee7fc0aa7d70c450967fddf706c56fe4af54a2ede94af9ae1aa1f75f2c772efc"} Mar 20 08:46:04 crc kubenswrapper[5136]: I0320 08:46:04.530474 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:04 crc kubenswrapper[5136]: I0320 08:46:04.675785 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wgwk\" (UniqueName: \"kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk\") pod \"895f2400-9932-4967-831f-f047de8c0f63\" (UID: \"895f2400-9932-4967-831f-f047de8c0f63\") " Mar 20 08:46:04 crc kubenswrapper[5136]: I0320 08:46:04.686831 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk" (OuterVolumeSpecName: "kube-api-access-7wgwk") pod "895f2400-9932-4967-831f-f047de8c0f63" (UID: "895f2400-9932-4967-831f-f047de8c0f63"). InnerVolumeSpecName "kube-api-access-7wgwk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:46:04 crc kubenswrapper[5136]: I0320 08:46:04.777654 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wgwk\" (UniqueName: \"kubernetes.io/projected/895f2400-9932-4967-831f-f047de8c0f63-kube-api-access-7wgwk\") on node \"crc\" DevicePath \"\"" Mar 20 08:46:05 crc kubenswrapper[5136]: I0320 08:46:05.226910 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566606-rdf48" event={"ID":"895f2400-9932-4967-831f-f047de8c0f63","Type":"ContainerDied","Data":"25ccbedf7f52d511dbfa63676fd266d9b896c3045f6514a33e16f8bec1197edd"} Mar 20 08:46:05 crc kubenswrapper[5136]: I0320 08:46:05.226947 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25ccbedf7f52d511dbfa63676fd266d9b896c3045f6514a33e16f8bec1197edd" Mar 20 08:46:05 crc kubenswrapper[5136]: I0320 08:46:05.226990 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566606-rdf48" Mar 20 08:46:05 crc kubenswrapper[5136]: I0320 08:46:05.606533 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566600-2m6nn"] Mar 20 08:46:05 crc kubenswrapper[5136]: I0320 08:46:05.613413 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566600-2m6nn"] Mar 20 08:46:06 crc kubenswrapper[5136]: I0320 08:46:06.398086 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:46:06 crc kubenswrapper[5136]: E0320 08:46:06.399364 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:46:06 crc kubenswrapper[5136]: I0320 08:46:06.408585 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3480cf66-9f91-4ce8-924c-0f730044c0de" path="/var/lib/kubelet/pods/3480cf66-9f91-4ce8-924c-0f730044c0de/volumes" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.379879 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"] Mar 20 08:46:07 crc kubenswrapper[5136]: E0320 08:46:07.380293 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="895f2400-9932-4967-831f-f047de8c0f63" containerName="oc" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.380309 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="895f2400-9932-4967-831f-f047de8c0f63" containerName="oc" Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.380553 5136 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="895f2400-9932-4967-831f-f047de8c0f63" containerName="oc"
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.381988 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.385546 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"]
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.530724 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwwnd\" (UniqueName: \"kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.530800 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.531020 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.632252 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.632314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.632427 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwwnd\" (UniqueName: \"kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.632958 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.633114 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.656331 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwwnd\" (UniqueName: \"kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd\") pod \"redhat-marketplace-nkp9h\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") " pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:07 crc kubenswrapper[5136]: I0320 08:46:07.702847 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:08 crc kubenswrapper[5136]: I0320 08:46:08.164880 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"]
Mar 20 08:46:08 crc kubenswrapper[5136]: I0320 08:46:08.253706 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerStarted","Data":"439cdaa884cb19e124eb2cc76843dcdec0c452cad325408ad2001b7577772cf2"}
Mar 20 08:46:09 crc kubenswrapper[5136]: I0320 08:46:09.263208 5136 generic.go:334] "Generic (PLEG): container finished" podID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerID="c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88" exitCode=0
Mar 20 08:46:09 crc kubenswrapper[5136]: I0320 08:46:09.263327 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerDied","Data":"c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88"}
Mar 20 08:46:10 crc kubenswrapper[5136]: I0320 08:46:10.273357 5136 generic.go:334] "Generic (PLEG): container finished" podID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerID="e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82" exitCode=0
Mar 20 08:46:10 crc kubenswrapper[5136]: I0320 08:46:10.273471 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerDied","Data":"e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82"}
Mar 20 08:46:11 crc kubenswrapper[5136]: I0320 08:46:11.283671 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerStarted","Data":"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"}
Mar 20 08:46:11 crc kubenswrapper[5136]: I0320 08:46:11.305848 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nkp9h" podStartSLOduration=2.87280569 podStartE2EDuration="4.305827174s" podCreationTimestamp="2026-03-20 08:46:07 +0000 UTC" firstStartedPulling="2026-03-20 08:46:09.26495414 +0000 UTC m=+7001.524265291" lastFinishedPulling="2026-03-20 08:46:10.697975624 +0000 UTC m=+7002.957286775" observedRunningTime="2026-03-20 08:46:11.30440473 +0000 UTC m=+7003.563715901" watchObservedRunningTime="2026-03-20 08:46:11.305827174 +0000 UTC m=+7003.565138325"
Mar 20 08:46:17 crc kubenswrapper[5136]: I0320 08:46:17.703767 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:17 crc kubenswrapper[5136]: I0320 08:46:17.704334 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:17 crc kubenswrapper[5136]: I0320 08:46:17.747733 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:18 crc kubenswrapper[5136]: I0320 08:46:18.381809 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:18 crc kubenswrapper[5136]: I0320 08:46:18.426190 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"]
Mar 20 08:46:19 crc kubenswrapper[5136]: I0320 08:46:19.396868 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:46:19 crc kubenswrapper[5136]: E0320 08:46:19.397454 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.349692 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nkp9h" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="registry-server" containerID="cri-o://b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50" gracePeriod=2
Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.826510 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.978600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwwnd\" (UniqueName: \"kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd\") pod \"53d1a23a-d4a8-45ec-969b-627514c8be8f\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") "
Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.978784 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities\") pod \"53d1a23a-d4a8-45ec-969b-627514c8be8f\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") "
Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.978877 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content\") pod \"53d1a23a-d4a8-45ec-969b-627514c8be8f\" (UID: \"53d1a23a-d4a8-45ec-969b-627514c8be8f\") "
Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.979963 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities" (OuterVolumeSpecName: "utilities") pod "53d1a23a-d4a8-45ec-969b-627514c8be8f" (UID: "53d1a23a-d4a8-45ec-969b-627514c8be8f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:46:20 crc kubenswrapper[5136]: I0320 08:46:20.984916 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd" (OuterVolumeSpecName: "kube-api-access-vwwnd") pod "53d1a23a-d4a8-45ec-969b-627514c8be8f" (UID: "53d1a23a-d4a8-45ec-969b-627514c8be8f"). InnerVolumeSpecName "kube-api-access-vwwnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.007462 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53d1a23a-d4a8-45ec-969b-627514c8be8f" (UID: "53d1a23a-d4a8-45ec-969b-627514c8be8f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.081183 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.081216 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53d1a23a-d4a8-45ec-969b-627514c8be8f-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.081228 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwwnd\" (UniqueName: \"kubernetes.io/projected/53d1a23a-d4a8-45ec-969b-627514c8be8f-kube-api-access-vwwnd\") on node \"crc\" DevicePath \"\""
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.357924 5136 generic.go:334] "Generic (PLEG): container finished" podID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerID="b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50" exitCode=0
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.357974 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerDied","Data":"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"}
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.358003 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nkp9h" event={"ID":"53d1a23a-d4a8-45ec-969b-627514c8be8f","Type":"ContainerDied","Data":"439cdaa884cb19e124eb2cc76843dcdec0c452cad325408ad2001b7577772cf2"}
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.358048 5136 scope.go:117] "RemoveContainer" containerID="b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.358196 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nkp9h"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.377201 5136 scope.go:117] "RemoveContainer" containerID="e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.397846 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"]
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.406983 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nkp9h"]
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.421213 5136 scope.go:117] "RemoveContainer" containerID="c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.442329 5136 scope.go:117] "RemoveContainer" containerID="b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"
Mar 20 08:46:21 crc kubenswrapper[5136]: E0320 08:46:21.442775 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50\": container with ID starting with b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50 not found: ID does not exist" containerID="b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.442830 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50"} err="failed to get container status \"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50\": rpc error: code = NotFound desc = could not find container \"b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50\": container with ID starting with b7f8bf19888cff630fee1f8b202dbafde67c867d254508da77659ad55cf1fd50 not found: ID does not exist"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.442857 5136 scope.go:117] "RemoveContainer" containerID="e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82"
Mar 20 08:46:21 crc kubenswrapper[5136]: E0320 08:46:21.443199 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82\": container with ID starting with e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82 not found: ID does not exist" containerID="e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.443233 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82"} err="failed to get container status \"e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82\": rpc error: code = NotFound desc = could not find container \"e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82\": container with ID starting with e3a0ad9bfbd150e683c1f7ae42e2f702dd0a9e0ce6998bf95ef9811a20000d82 not found: ID does not exist"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.443247 5136 scope.go:117] "RemoveContainer" containerID="c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88"
Mar 20 08:46:21 crc kubenswrapper[5136]: E0320 08:46:21.443503 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88\": container with ID starting with c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88 not found: ID does not exist" containerID="c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88"
Mar 20 08:46:21 crc kubenswrapper[5136]: I0320 08:46:21.443525 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88"} err="failed to get container status \"c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88\": rpc error: code = NotFound desc = could not find container \"c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88\": container with ID starting with c976afd718ab067c192cc57025ff84e5c588dd836c15055f7af944ac48b81c88 not found: ID does not exist"
Mar 20 08:46:22 crc kubenswrapper[5136]: I0320 08:46:22.405863 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" path="/var/lib/kubelet/pods/53d1a23a-d4a8-45ec-969b-627514c8be8f/volumes"
Mar 20 08:46:33 crc kubenswrapper[5136]: I0320 08:46:33.398252 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:46:33 crc kubenswrapper[5136]: E0320 08:46:33.399558 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:46:38 crc kubenswrapper[5136]: I0320 08:46:38.628217 5136 scope.go:117] "RemoveContainer" containerID="ada9ce7b7b306f2b5dbbf312318f5ac5adc2a593ce372df15119878b742a8edb"
Mar 20 08:46:47 crc kubenswrapper[5136]: I0320 08:46:47.396991 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:46:47 crc kubenswrapper[5136]: E0320 08:46:47.397805 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.340302 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-gqtht"]
Mar 20 08:46:58 crc kubenswrapper[5136]: E0320 08:46:58.341111 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="extract-content"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.341124 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="extract-content"
Mar 20 08:46:58 crc kubenswrapper[5136]: E0320 08:46:58.341145 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="extract-utilities"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.341151 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="extract-utilities"
Mar 20 08:46:58 crc kubenswrapper[5136]: E0320 08:46:58.341163 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="registry-server"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.341169 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="registry-server"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.341308 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="53d1a23a-d4a8-45ec-969b-627514c8be8f" containerName="registry-server"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.341877 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.351110 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gqtht"]
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.385918 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.386030 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk79r\" (UniqueName: \"kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.447448 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-24c6-account-create-update-625nw"]
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.448906 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.451489 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.459015 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-24c6-account-create-update-625nw"]
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.487091 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.487151 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.487180 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj2lc\" (UniqueName: \"kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.487285 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk79r\" (UniqueName: \"kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.488002 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.505604 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk79r\" (UniqueName: \"kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r\") pod \"neutron-db-create-gqtht\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") " pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.588999 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.589076 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj2lc\" (UniqueName: \"kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.590044 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.605440 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj2lc\" (UniqueName: \"kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc\") pod \"neutron-24c6-account-create-update-625nw\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") " pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.673622 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gqtht"
Mar 20 08:46:58 crc kubenswrapper[5136]: I0320 08:46:58.768392 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.087563 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-gqtht"]
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.212522 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-24c6-account-create-update-625nw"]
Mar 20 08:46:59 crc kubenswrapper[5136]: W0320 08:46:59.215699 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06c8bf45_d717_45f4_9679_7f6b69835f8a.slice/crio-171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd WatchSource:0}: Error finding container 171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd: Status 404 returned error can't find the container with id 171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.396172 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:46:59 crc kubenswrapper[5136]: E0320 08:46:59.396623 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.653295 5136 generic.go:334] "Generic (PLEG): container finished" podID="a4aab638-4f7d-46a0-bc82-10fe569b56db" containerID="87d4064c210f2c8ecf2546f67dd8fe9ef436d4f291209d0fa6a7f5ba97b6e5e4" exitCode=0
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.653347 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gqtht" event={"ID":"a4aab638-4f7d-46a0-bc82-10fe569b56db","Type":"ContainerDied","Data":"87d4064c210f2c8ecf2546f67dd8fe9ef436d4f291209d0fa6a7f5ba97b6e5e4"}
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.653396 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gqtht" event={"ID":"a4aab638-4f7d-46a0-bc82-10fe569b56db","Type":"ContainerStarted","Data":"ec07ab9166ac551f71d621d181e619f44a7113103b0c42e99304c37347fd9055"}
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.654978 5136 generic.go:334] "Generic (PLEG): container finished" podID="06c8bf45-d717-45f4-9679-7f6b69835f8a" containerID="4b1f554c7f496a2460aeebf430477f38851975d2eafc2fb7735f082f6ef9d928" exitCode=0
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.655022 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24c6-account-create-update-625nw" event={"ID":"06c8bf45-d717-45f4-9679-7f6b69835f8a","Type":"ContainerDied","Data":"4b1f554c7f496a2460aeebf430477f38851975d2eafc2fb7735f082f6ef9d928"}
Mar 20 08:46:59 crc kubenswrapper[5136]: I0320 08:46:59.655061 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24c6-account-create-update-625nw" event={"ID":"06c8bf45-d717-45f4-9679-7f6b69835f8a","Type":"ContainerStarted","Data":"171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd"}
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.118645 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.126239 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gqtht"
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.235746 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts\") pod \"06c8bf45-d717-45f4-9679-7f6b69835f8a\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") "
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.235843 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts\") pod \"a4aab638-4f7d-46a0-bc82-10fe569b56db\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") "
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.235876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj2lc\" (UniqueName: \"kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc\") pod \"06c8bf45-d717-45f4-9679-7f6b69835f8a\" (UID: \"06c8bf45-d717-45f4-9679-7f6b69835f8a\") "
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.236433 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4aab638-4f7d-46a0-bc82-10fe569b56db" (UID: "a4aab638-4f7d-46a0-bc82-10fe569b56db"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.236479 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06c8bf45-d717-45f4-9679-7f6b69835f8a" (UID: "06c8bf45-d717-45f4-9679-7f6b69835f8a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.236768 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk79r\" (UniqueName: \"kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r\") pod \"a4aab638-4f7d-46a0-bc82-10fe569b56db\" (UID: \"a4aab638-4f7d-46a0-bc82-10fe569b56db\") "
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.237381 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06c8bf45-d717-45f4-9679-7f6b69835f8a-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.237416 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4aab638-4f7d-46a0-bc82-10fe569b56db-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.241303 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc" (OuterVolumeSpecName: "kube-api-access-lj2lc") pod "06c8bf45-d717-45f4-9679-7f6b69835f8a" (UID: "06c8bf45-d717-45f4-9679-7f6b69835f8a"). InnerVolumeSpecName "kube-api-access-lj2lc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.241494 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r" (OuterVolumeSpecName: "kube-api-access-lk79r") pod "a4aab638-4f7d-46a0-bc82-10fe569b56db" (UID: "a4aab638-4f7d-46a0-bc82-10fe569b56db"). InnerVolumeSpecName "kube-api-access-lk79r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.339278 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj2lc\" (UniqueName: \"kubernetes.io/projected/06c8bf45-d717-45f4-9679-7f6b69835f8a-kube-api-access-lj2lc\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.339634 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk79r\" (UniqueName: \"kubernetes.io/projected/a4aab638-4f7d-46a0-bc82-10fe569b56db-kube-api-access-lk79r\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.678787 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-gqtht" event={"ID":"a4aab638-4f7d-46a0-bc82-10fe569b56db","Type":"ContainerDied","Data":"ec07ab9166ac551f71d621d181e619f44a7113103b0c42e99304c37347fd9055"}
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.679330 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec07ab9166ac551f71d621d181e619f44a7113103b0c42e99304c37347fd9055"
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.678867 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-gqtht"
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.682034 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24c6-account-create-update-625nw" event={"ID":"06c8bf45-d717-45f4-9679-7f6b69835f8a","Type":"ContainerDied","Data":"171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd"}
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.682079 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="171ea8235c48d1c686bc8225ddfa1307d597d34ec63ce078d04f4c3df707badd"
Mar 20 08:47:01 crc kubenswrapper[5136]: I0320 08:47:01.682143 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-625nw"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.751857 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-grfwk"]
Mar 20 08:47:03 crc kubenswrapper[5136]: E0320 08:47:03.752539 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06c8bf45-d717-45f4-9679-7f6b69835f8a" containerName="mariadb-account-create-update"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.752555 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="06c8bf45-d717-45f4-9679-7f6b69835f8a" containerName="mariadb-account-create-update"
Mar 20 08:47:03 crc kubenswrapper[5136]: E0320 08:47:03.752577 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4aab638-4f7d-46a0-bc82-10fe569b56db" containerName="mariadb-database-create"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.752587 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4aab638-4f7d-46a0-bc82-10fe569b56db" containerName="mariadb-database-create"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.752855 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4aab638-4f7d-46a0-bc82-10fe569b56db" containerName="mariadb-database-create"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.752882 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="06c8bf45-d717-45f4-9679-7f6b69835f8a" containerName="mariadb-account-create-update"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.753547 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.756291 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-v6fw7"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.756541 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.756863 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.762352 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-grfwk"]
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.891381 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.891537 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drgz2\" (UniqueName: \"kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk"
Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.891589 5136 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk" Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.992291 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk" Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.992549 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drgz2\" (UniqueName: \"kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk" Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.992666 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk" Mar 20 08:47:03 crc kubenswrapper[5136]: I0320 08:47:03.999119 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk" Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.001360 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk" Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.017762 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drgz2\" (UniqueName: \"kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2\") pod \"neutron-db-sync-grfwk\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " pod="openstack/neutron-db-sync-grfwk" Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.081646 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-grfwk" Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.528678 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-grfwk"] Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.707342 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grfwk" event={"ID":"4c6db9e6-4059-4911-b008-680848fffdbe","Type":"ContainerStarted","Data":"340051113d29f7efbc695a906b0061f19d9714ea49c2643f84b952194ea4de4c"} Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.707662 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grfwk" event={"ID":"4c6db9e6-4059-4911-b008-680848fffdbe","Type":"ContainerStarted","Data":"71a2b577b4f8974b06ad9600916dc0bf420b74f2ea8b24f702012664dbd73634"} Mar 20 08:47:04 crc kubenswrapper[5136]: I0320 08:47:04.725204 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-grfwk" podStartSLOduration=1.725184265 podStartE2EDuration="1.725184265s" podCreationTimestamp="2026-03-20 08:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:47:04.721621694 +0000 UTC m=+7056.980932845" 
watchObservedRunningTime="2026-03-20 08:47:04.725184265 +0000 UTC m=+7056.984495416" Mar 20 08:47:09 crc kubenswrapper[5136]: I0320 08:47:09.751427 5136 generic.go:334] "Generic (PLEG): container finished" podID="4c6db9e6-4059-4911-b008-680848fffdbe" containerID="340051113d29f7efbc695a906b0061f19d9714ea49c2643f84b952194ea4de4c" exitCode=0 Mar 20 08:47:09 crc kubenswrapper[5136]: I0320 08:47:09.751536 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grfwk" event={"ID":"4c6db9e6-4059-4911-b008-680848fffdbe","Type":"ContainerDied","Data":"340051113d29f7efbc695a906b0061f19d9714ea49c2643f84b952194ea4de4c"} Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.111419 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-grfwk" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.116108 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drgz2\" (UniqueName: \"kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2\") pod \"4c6db9e6-4059-4911-b008-680848fffdbe\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.116232 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config\") pod \"4c6db9e6-4059-4911-b008-680848fffdbe\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.116363 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle\") pod \"4c6db9e6-4059-4911-b008-680848fffdbe\" (UID: \"4c6db9e6-4059-4911-b008-680848fffdbe\") " Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.121743 5136 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2" (OuterVolumeSpecName: "kube-api-access-drgz2") pod "4c6db9e6-4059-4911-b008-680848fffdbe" (UID: "4c6db9e6-4059-4911-b008-680848fffdbe"). InnerVolumeSpecName "kube-api-access-drgz2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.147495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config" (OuterVolumeSpecName: "config") pod "4c6db9e6-4059-4911-b008-680848fffdbe" (UID: "4c6db9e6-4059-4911-b008-680848fffdbe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.147786 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4c6db9e6-4059-4911-b008-680848fffdbe" (UID: "4c6db9e6-4059-4911-b008-680848fffdbe"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.217373 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.217405 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drgz2\" (UniqueName: \"kubernetes.io/projected/4c6db9e6-4059-4911-b008-680848fffdbe-kube-api-access-drgz2\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.217417 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/4c6db9e6-4059-4911-b008-680848fffdbe-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.773255 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-grfwk" event={"ID":"4c6db9e6-4059-4911-b008-680848fffdbe","Type":"ContainerDied","Data":"71a2b577b4f8974b06ad9600916dc0bf420b74f2ea8b24f702012664dbd73634"} Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.773308 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71a2b577b4f8974b06ad9600916dc0bf420b74f2ea8b24f702012664dbd73634" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.773332 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-grfwk" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.937055 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"] Mar 20 08:47:11 crc kubenswrapper[5136]: E0320 08:47:11.937588 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c6db9e6-4059-4911-b008-680848fffdbe" containerName="neutron-db-sync" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.937603 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c6db9e6-4059-4911-b008-680848fffdbe" containerName="neutron-db-sync" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.937828 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c6db9e6-4059-4911-b008-680848fffdbe" containerName="neutron-db-sync" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.938662 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:11 crc kubenswrapper[5136]: I0320 08:47:11.951438 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"] Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.022227 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-86b9496f44-69p9k"] Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.023620 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.027807 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.027956 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.028147 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-v6fw7" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.028363 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032551 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032607 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dxjq\" (UniqueName: \"kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032650 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nj86x\" (UniqueName: \"kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc 
kubenswrapper[5136]: I0320 08:47:12.032683 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032710 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032785 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032824 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032882 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 
crc kubenswrapper[5136]: I0320 08:47:12.032940 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.032991 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.043173 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86b9496f44-69p9k"] Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133614 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133698 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133769 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: 
\"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133795 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133854 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133907 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.133953 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.134005 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc 
kubenswrapper[5136]: I0320 08:47:12.134028 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dxjq\" (UniqueName: \"kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.134186 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nj86x\" (UniqueName: \"kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.134714 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.135114 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.135249 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.135355 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.143303 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.143503 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.144388 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.148679 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.151489 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nj86x\" (UniqueName: 
\"kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x\") pod \"dnsmasq-dns-6968b46cdc-n6kjz\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.156691 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dxjq\" (UniqueName: \"kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq\") pod \"neutron-86b9496f44-69p9k\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.249765 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qhpzm"] Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.251423 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.254417 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.267426 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qhpzm"] Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.337540 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.338657 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.338734 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scxxp\" (UniqueName: \"kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.338779 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.439979 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.440366 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content\") pod \"community-operators-qhpzm\" (UID: 
\"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.440411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scxxp\" (UniqueName: \"kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.441173 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.441424 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.459479 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scxxp\" (UniqueName: \"kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp\") pod \"community-operators-qhpzm\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") " pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.577270 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qhpzm" Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.752678 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"] Mar 20 08:47:12 crc kubenswrapper[5136]: I0320 08:47:12.792011 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" event={"ID":"4a740a83-3e08-402b-9e5b-6c8a62a80435","Type":"ContainerStarted","Data":"676c8c5cdb1dfc8268622e52dd2300c796e32f8a976b6c157621d92d03db62f0"} Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.042518 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-86b9496f44-69p9k"] Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.070143 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qhpzm"] Mar 20 08:47:13 crc kubenswrapper[5136]: W0320 08:47:13.134930 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05900948_fec4_4c61_846c_648b8e5cf6b2.slice/crio-4213271971af3275b36a135c5b91b5e7835dfce4ad7bd980eb7f80a6cc2afa83 WatchSource:0}: Error finding container 4213271971af3275b36a135c5b91b5e7835dfce4ad7bd980eb7f80a6cc2afa83: Status 404 returned error can't find the container with id 4213271971af3275b36a135c5b91b5e7835dfce4ad7bd980eb7f80a6cc2afa83 Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.801571 5136 generic.go:334] "Generic (PLEG): container finished" podID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerID="8f73d6f969a6dcfad143a7dea5aee18ef87be55ad10ac352de6c2af3efe4415d" exitCode=0 Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.801834 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" event={"ID":"4a740a83-3e08-402b-9e5b-6c8a62a80435","Type":"ContainerDied","Data":"8f73d6f969a6dcfad143a7dea5aee18ef87be55ad10ac352de6c2af3efe4415d"} 
Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.804071 5136 generic.go:334] "Generic (PLEG): container finished" podID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerID="41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501" exitCode=0
Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.804644 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerDied","Data":"41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501"}
Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.804681 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerStarted","Data":"4213271971af3275b36a135c5b91b5e7835dfce4ad7bd980eb7f80a6cc2afa83"}
Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.806360 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerStarted","Data":"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947"}
Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.806390 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerStarted","Data":"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795"}
Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.806404 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerStarted","Data":"33d00c289e95766dad520f73944beacae4014f8a82a254a5b586260bbef5a1d4"}
Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.806531 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-86b9496f44-69p9k"
Mar 20 08:47:13 crc kubenswrapper[5136]: I0320 08:47:13.869461 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-86b9496f44-69p9k" podStartSLOduration=1.869446019 podStartE2EDuration="1.869446019s" podCreationTimestamp="2026-03-20 08:47:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:47:13.852381119 +0000 UTC m=+7066.111692270" watchObservedRunningTime="2026-03-20 08:47:13.869446019 +0000 UTC m=+7066.128757170"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.308545 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"]
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.310314 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.312232 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.319177 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"]
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.325206 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.396924 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:47:14 crc kubenswrapper[5136]: E0320 08:47:14.397237 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.472307 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn4qs\" (UniqueName: \"kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.472346 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.472372 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.472423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.473114 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.473967 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.474080 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575434 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575509 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn4qs\" (UniqueName: \"kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575528 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575546 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575673 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575848 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.575914 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.581036 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.581325 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.589532 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.590564 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.591906 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.593124 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.598775 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn4qs\" (UniqueName: \"kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs\") pod \"neutron-7b494fbb57-cd7nw\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.630062 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.820087 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" event={"ID":"4a740a83-3e08-402b-9e5b-6c8a62a80435","Type":"ContainerStarted","Data":"85a4536c0d633a089bcb60d3330e0a39a977db11277f10ac374ebe343ed461c2"}
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.820414 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz"
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.822308 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerStarted","Data":"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f"}
Mar 20 08:47:14 crc kubenswrapper[5136]: I0320 08:47:14.845222 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" podStartSLOduration=3.8452012030000002 podStartE2EDuration="3.845201203s" podCreationTimestamp="2026-03-20 08:47:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:47:14.838425794 +0000 UTC m=+7067.097736945" watchObservedRunningTime="2026-03-20 08:47:14.845201203 +0000 UTC m=+7067.104512354"
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.166383 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"]
Mar 20 08:47:15 crc kubenswrapper[5136]: W0320 08:47:15.167063 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305f3f22_2f38_44c5_8e63_1f028edce331.slice/crio-98850495a9f03337c375a82dd9c14c9acf7e4cb2584c596cc8791bfade8a3bf0 WatchSource:0}: Error finding container 98850495a9f03337c375a82dd9c14c9acf7e4cb2584c596cc8791bfade8a3bf0: Status 404 returned error can't find the container with id 98850495a9f03337c375a82dd9c14c9acf7e4cb2584c596cc8791bfade8a3bf0
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.850034 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerStarted","Data":"b54ae1c896c24440630a7756d526255f3def96dbed5cb096fc4d77997e706367"}
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.850296 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerStarted","Data":"293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a"}
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.850311 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7b494fbb57-cd7nw"
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.850320 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerStarted","Data":"98850495a9f03337c375a82dd9c14c9acf7e4cb2584c596cc8791bfade8a3bf0"}
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.853180 5136 generic.go:334] "Generic (PLEG): container finished" podID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerID="4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f" exitCode=0
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.853314 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerDied","Data":"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f"}
Mar 20 08:47:15 crc kubenswrapper[5136]: I0320 08:47:15.867354 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7b494fbb57-cd7nw" podStartSLOduration=1.867338088 podStartE2EDuration="1.867338088s" podCreationTimestamp="2026-03-20 08:47:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:47:15.865244183 +0000 UTC m=+7068.124555334" watchObservedRunningTime="2026-03-20 08:47:15.867338088 +0000 UTC m=+7068.126649239"
Mar 20 08:47:16 crc kubenswrapper[5136]: I0320 08:47:16.862081 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerStarted","Data":"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"}
Mar 20 08:47:16 crc kubenswrapper[5136]: I0320 08:47:16.885973 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qhpzm" podStartSLOduration=2.33199341 podStartE2EDuration="4.885956703s" podCreationTimestamp="2026-03-20 08:47:12 +0000 UTC" firstStartedPulling="2026-03-20 08:47:13.805647019 +0000 UTC m=+7066.064958170" lastFinishedPulling="2026-03-20 08:47:16.359610302 +0000 UTC m=+7068.618921463" observedRunningTime="2026-03-20 08:47:16.877944734 +0000 UTC m=+7069.137255885" watchObservedRunningTime="2026-03-20 08:47:16.885956703 +0000 UTC m=+7069.145267854"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.255962 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.337306 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"]
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.337561 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-777959d579-j5npb" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="dnsmasq-dns" containerID="cri-o://f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288" gracePeriod=10
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.578452 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.578857 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.648504 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.824436 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.918224 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerID="f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288" exitCode=0
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.918281 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-777959d579-j5npb"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.918336 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777959d579-j5npb" event={"ID":"bd5e6126-8bb0-497c-9a3a-856e96128e83","Type":"ContainerDied","Data":"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"}
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.918365 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-777959d579-j5npb" event={"ID":"bd5e6126-8bb0-497c-9a3a-856e96128e83","Type":"ContainerDied","Data":"148add14ed98710953a69caa68c00199c9d76d0d24f031860e3e3fd6ef37946b"}
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.918387 5136 scope.go:117] "RemoveContainer" containerID="f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.928136 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb\") pod \"bd5e6126-8bb0-497c-9a3a-856e96128e83\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") "
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.928397 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb\") pod \"bd5e6126-8bb0-497c-9a3a-856e96128e83\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") "
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.928451 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzdjb\" (UniqueName: \"kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb\") pod \"bd5e6126-8bb0-497c-9a3a-856e96128e83\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") "
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.928483 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config\") pod \"bd5e6126-8bb0-497c-9a3a-856e96128e83\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") "
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.928590 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc\") pod \"bd5e6126-8bb0-497c-9a3a-856e96128e83\" (UID: \"bd5e6126-8bb0-497c-9a3a-856e96128e83\") "
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.938229 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb" (OuterVolumeSpecName: "kube-api-access-hzdjb") pod "bd5e6126-8bb0-497c-9a3a-856e96128e83" (UID: "bd5e6126-8bb0-497c-9a3a-856e96128e83"). InnerVolumeSpecName "kube-api-access-hzdjb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.940868 5136 scope.go:117] "RemoveContainer" containerID="721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.964372 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.976894 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config" (OuterVolumeSpecName: "config") pod "bd5e6126-8bb0-497c-9a3a-856e96128e83" (UID: "bd5e6126-8bb0-497c-9a3a-856e96128e83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.987781 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd5e6126-8bb0-497c-9a3a-856e96128e83" (UID: "bd5e6126-8bb0-497c-9a3a-856e96128e83"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.990259 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd5e6126-8bb0-497c-9a3a-856e96128e83" (UID: "bd5e6126-8bb0-497c-9a3a-856e96128e83"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:22 crc kubenswrapper[5136]: I0320 08:47:22.992079 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd5e6126-8bb0-497c-9a3a-856e96128e83" (UID: "bd5e6126-8bb0-497c-9a3a-856e96128e83"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.009955 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qhpzm"]
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.030709 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.030735 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.030766 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.030777 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzdjb\" (UniqueName: \"kubernetes.io/projected/bd5e6126-8bb0-497c-9a3a-856e96128e83-kube-api-access-hzdjb\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.030786 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd5e6126-8bb0-497c-9a3a-856e96128e83-config\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.040401 5136 scope.go:117] "RemoveContainer" containerID="f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"
Mar 20 08:47:23 crc kubenswrapper[5136]: E0320 08:47:23.040916 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288\": container with ID starting with f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288 not found: ID does not exist" containerID="f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.040948 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288"} err="failed to get container status \"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288\": rpc error: code = NotFound desc = could not find container \"f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288\": container with ID starting with f9236230b9fa7da8ff26e29b3fe972125ee08ee7e480a9780d799c8339c58288 not found: ID does not exist"
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.040967 5136 scope.go:117] "RemoveContainer" containerID="721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217"
Mar 20 08:47:23 crc kubenswrapper[5136]: E0320 08:47:23.042515 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217\": container with ID starting with 721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217 not found: ID does not exist" containerID="721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217"
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.042559 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217"} err="failed to get container status \"721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217\": rpc error: code = NotFound desc = could not find container \"721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217\": container with ID starting with 721920444b40b4f18631c1504b8c2ada795b095948db82ee30360288e3661217 not found: ID does not exist"
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.247475 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"]
Mar 20 08:47:23 crc kubenswrapper[5136]: I0320 08:47:23.255193 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-777959d579-j5npb"]
Mar 20 08:47:24 crc kubenswrapper[5136]: I0320 08:47:24.407372 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" path="/var/lib/kubelet/pods/bd5e6126-8bb0-497c-9a3a-856e96128e83/volumes"
Mar 20 08:47:24 crc kubenswrapper[5136]: I0320 08:47:24.934315 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qhpzm" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="registry-server" containerID="cri-o://4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305" gracePeriod=2
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.379733 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.396994 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a"
Mar 20 08:47:25 crc kubenswrapper[5136]: E0320 08:47:25.397394 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.470163 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities\") pod \"05900948-fec4-4c61-846c-648b8e5cf6b2\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") "
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.470241 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content\") pod \"05900948-fec4-4c61-846c-648b8e5cf6b2\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") "
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.470342 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scxxp\" (UniqueName: \"kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp\") pod \"05900948-fec4-4c61-846c-648b8e5cf6b2\" (UID: \"05900948-fec4-4c61-846c-648b8e5cf6b2\") "
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.471953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities" (OuterVolumeSpecName: "utilities") pod "05900948-fec4-4c61-846c-648b8e5cf6b2" (UID: "05900948-fec4-4c61-846c-648b8e5cf6b2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.475043 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp" (OuterVolumeSpecName: "kube-api-access-scxxp") pod "05900948-fec4-4c61-846c-648b8e5cf6b2" (UID: "05900948-fec4-4c61-846c-648b8e5cf6b2"). InnerVolumeSpecName "kube-api-access-scxxp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.572651 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.572682 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scxxp\" (UniqueName: \"kubernetes.io/projected/05900948-fec4-4c61-846c-648b8e5cf6b2-kube-api-access-scxxp\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.601308 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05900948-fec4-4c61-846c-648b8e5cf6b2" (UID: "05900948-fec4-4c61-846c-648b8e5cf6b2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.674347 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05900948-fec4-4c61-846c-648b8e5cf6b2-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.944414 5136 generic.go:334] "Generic (PLEG): container finished" podID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerID="4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305" exitCode=0
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.944500 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerDied","Data":"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"}
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.944557 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qhpzm" event={"ID":"05900948-fec4-4c61-846c-648b8e5cf6b2","Type":"ContainerDied","Data":"4213271971af3275b36a135c5b91b5e7835dfce4ad7bd980eb7f80a6cc2afa83"}
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.944588 5136 scope.go:117] "RemoveContainer" containerID="4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.944582 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qhpzm"
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.963192 5136 scope.go:117] "RemoveContainer" containerID="4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f"
Mar 20 08:47:25 crc kubenswrapper[5136]: I0320 08:47:25.979715 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qhpzm"]
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.026519 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qhpzm"]
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.027280 5136 scope.go:117] "RemoveContainer" containerID="41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501"
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.066185 5136 scope.go:117] "RemoveContainer" containerID="4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"
Mar 20 08:47:26 crc kubenswrapper[5136]: E0320 08:47:26.066594 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305\": container with ID starting with 4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305 not found: ID does not exist" containerID="4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"
Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.066638 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305"} err="failed to get container status \"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305\": rpc error: code = NotFound desc = could not find container \"4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305\": container with ID starting with 4f940521dd4f457e42d4916cc916dfba2eaa15f9d93925d75ccb9dbcefdb2305 not
found: ID does not exist" Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.066664 5136 scope.go:117] "RemoveContainer" containerID="4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f" Mar 20 08:47:26 crc kubenswrapper[5136]: E0320 08:47:26.067103 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f\": container with ID starting with 4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f not found: ID does not exist" containerID="4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f" Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.067130 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f"} err="failed to get container status \"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f\": rpc error: code = NotFound desc = could not find container \"4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f\": container with ID starting with 4db76246b1a215a792533d6b3bf71116de9e3852ac22d6c08255a9f7943e303f not found: ID does not exist" Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.067146 5136 scope.go:117] "RemoveContainer" containerID="41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501" Mar 20 08:47:26 crc kubenswrapper[5136]: E0320 08:47:26.067535 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501\": container with ID starting with 41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501 not found: ID does not exist" containerID="41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501" Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.067557 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501"} err="failed to get container status \"41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501\": rpc error: code = NotFound desc = could not find container \"41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501\": container with ID starting with 41b87269bcda0978ecb207cefa0a9c1baa54bb58a7066454e13df7e7f59a8501 not found: ID does not exist" Mar 20 08:47:26 crc kubenswrapper[5136]: I0320 08:47:26.406734 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" path="/var/lib/kubelet/pods/05900948-fec4-4c61-846c-648b8e5cf6b2/volumes" Mar 20 08:47:39 crc kubenswrapper[5136]: I0320 08:47:39.396515 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:47:39 crc kubenswrapper[5136]: E0320 08:47:39.397329 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:47:42 crc kubenswrapper[5136]: I0320 08:47:42.345667 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:44 crc kubenswrapper[5136]: I0320 08:47:44.640626 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 08:47:44 crc kubenswrapper[5136]: I0320 08:47:44.710397 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86b9496f44-69p9k"] Mar 20 08:47:44 crc 
kubenswrapper[5136]: I0320 08:47:44.710647 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86b9496f44-69p9k" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-api" containerID="cri-o://be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795" gracePeriod=30 Mar 20 08:47:44 crc kubenswrapper[5136]: I0320 08:47:44.710803 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-86b9496f44-69p9k" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-httpd" containerID="cri-o://6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947" gracePeriod=30 Mar 20 08:47:45 crc kubenswrapper[5136]: I0320 08:47:45.122330 5136 generic.go:334] "Generic (PLEG): container finished" podID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerID="6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947" exitCode=0 Mar 20 08:47:45 crc kubenswrapper[5136]: I0320 08:47:45.122428 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerDied","Data":"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947"} Mar 20 08:47:48 crc kubenswrapper[5136]: I0320 08:47:48.850958 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.049606 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle\") pod \"6470043c-e2e0-4acd-8c90-5f38ffca2924\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.049660 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config\") pod \"6470043c-e2e0-4acd-8c90-5f38ffca2924\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.049743 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config\") pod \"6470043c-e2e0-4acd-8c90-5f38ffca2924\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.049879 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs\") pod \"6470043c-e2e0-4acd-8c90-5f38ffca2924\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.049953 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dxjq\" (UniqueName: \"kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq\") pod \"6470043c-e2e0-4acd-8c90-5f38ffca2924\" (UID: \"6470043c-e2e0-4acd-8c90-5f38ffca2924\") " Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.067770 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6470043c-e2e0-4acd-8c90-5f38ffca2924" (UID: "6470043c-e2e0-4acd-8c90-5f38ffca2924"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.067792 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq" (OuterVolumeSpecName: "kube-api-access-8dxjq") pod "6470043c-e2e0-4acd-8c90-5f38ffca2924" (UID: "6470043c-e2e0-4acd-8c90-5f38ffca2924"). InnerVolumeSpecName "kube-api-access-8dxjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.091683 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config" (OuterVolumeSpecName: "config") pod "6470043c-e2e0-4acd-8c90-5f38ffca2924" (UID: "6470043c-e2e0-4acd-8c90-5f38ffca2924"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.103795 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6470043c-e2e0-4acd-8c90-5f38ffca2924" (UID: "6470043c-e2e0-4acd-8c90-5f38ffca2924"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.140302 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6470043c-e2e0-4acd-8c90-5f38ffca2924" (UID: "6470043c-e2e0-4acd-8c90-5f38ffca2924"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.151664 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.151694 5136 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.151707 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dxjq\" (UniqueName: \"kubernetes.io/projected/6470043c-e2e0-4acd-8c90-5f38ffca2924-kube-api-access-8dxjq\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.151718 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.151727 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6470043c-e2e0-4acd-8c90-5f38ffca2924-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.163107 5136 generic.go:334] "Generic (PLEG): container finished" podID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerID="be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795" exitCode=0 Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.163185 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerDied","Data":"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795"} Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 
08:47:49.163238 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-86b9496f44-69p9k" event={"ID":"6470043c-e2e0-4acd-8c90-5f38ffca2924","Type":"ContainerDied","Data":"33d00c289e95766dad520f73944beacae4014f8a82a254a5b586260bbef5a1d4"} Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.163272 5136 scope.go:117] "RemoveContainer" containerID="6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.163544 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-86b9496f44-69p9k" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.196067 5136 scope.go:117] "RemoveContainer" containerID="be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.202226 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-86b9496f44-69p9k"] Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.209696 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-86b9496f44-69p9k"] Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.218677 5136 scope.go:117] "RemoveContainer" containerID="6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947" Mar 20 08:47:49 crc kubenswrapper[5136]: E0320 08:47:49.219202 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947\": container with ID starting with 6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947 not found: ID does not exist" containerID="6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.219313 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947"} 
err="failed to get container status \"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947\": rpc error: code = NotFound desc = could not find container \"6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947\": container with ID starting with 6085d3cfa9be236fd5d06b5dab1066999d8fc73ea01064dc1e7a9abb8cfd0947 not found: ID does not exist" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.219445 5136 scope.go:117] "RemoveContainer" containerID="be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795" Mar 20 08:47:49 crc kubenswrapper[5136]: E0320 08:47:49.220122 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795\": container with ID starting with be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795 not found: ID does not exist" containerID="be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795" Mar 20 08:47:49 crc kubenswrapper[5136]: I0320 08:47:49.220156 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795"} err="failed to get container status \"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795\": rpc error: code = NotFound desc = could not find container \"be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795\": container with ID starting with be07aa939b9765dd76bed7c8c3352a387e68aa345f63f54f1be9a1351c044795 not found: ID does not exist" Mar 20 08:47:50 crc kubenswrapper[5136]: I0320 08:47:50.408268 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" path="/var/lib/kubelet/pods/6470043c-e2e0-4acd-8c90-5f38ffca2924/volumes" Mar 20 08:47:53 crc kubenswrapper[5136]: I0320 08:47:53.396661 5136 scope.go:117] "RemoveContainer" 
containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:47:53 crc kubenswrapper[5136]: E0320 08:47:53.397130 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.142725 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566608-tdwn4"] Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143417 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="extract-utilities" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143434 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="extract-utilities" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143452 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="init" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143462 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="init" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143494 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-api" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143502 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-api" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143526 5136 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-httpd" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143534 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-httpd" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143549 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="extract-content" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143557 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="extract-content" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143570 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="dnsmasq-dns" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143577 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="dnsmasq-dns" Mar 20 08:48:00 crc kubenswrapper[5136]: E0320 08:48:00.143589 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="registry-server" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143597 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="registry-server" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143792 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-httpd" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143826 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd5e6126-8bb0-497c-9a3a-856e96128e83" containerName="dnsmasq-dns" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143836 5136 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="05900948-fec4-4c61-846c-648b8e5cf6b2" containerName="registry-server" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.143853 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6470043c-e2e0-4acd-8c90-5f38ffca2924" containerName="neutron-api" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.144545 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.147587 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.147757 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.148089 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.150028 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566608-tdwn4"] Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.239122 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mdc\" (UniqueName: \"kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc\") pod \"auto-csr-approver-29566608-tdwn4\" (UID: \"ef82e0a5-a043-48d9-82d6-132dbf0e9b74\") " pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.340664 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mdc\" (UniqueName: \"kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc\") pod \"auto-csr-approver-29566608-tdwn4\" (UID: \"ef82e0a5-a043-48d9-82d6-132dbf0e9b74\") " 
pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.377031 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9mdc\" (UniqueName: \"kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc\") pod \"auto-csr-approver-29566608-tdwn4\" (UID: \"ef82e0a5-a043-48d9-82d6-132dbf0e9b74\") " pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.479893 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:00 crc kubenswrapper[5136]: I0320 08:48:00.957123 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566608-tdwn4"] Mar 20 08:48:01 crc kubenswrapper[5136]: I0320 08:48:01.265126 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" event={"ID":"ef82e0a5-a043-48d9-82d6-132dbf0e9b74","Type":"ContainerStarted","Data":"46b0c2fb5f21a668a1cd3e8ba21fde944d41173fed7c941c91de2c944a6a6f73"} Mar 20 08:48:02 crc kubenswrapper[5136]: I0320 08:48:02.278026 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" event={"ID":"ef82e0a5-a043-48d9-82d6-132dbf0e9b74","Type":"ContainerStarted","Data":"fa17c24ddb45b337fff5f936348cd486ca94ec7240798c49bc135753e4d62ff4"} Mar 20 08:48:02 crc kubenswrapper[5136]: I0320 08:48:02.295619 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" podStartSLOduration=1.264273977 podStartE2EDuration="2.295602167s" podCreationTimestamp="2026-03-20 08:48:00 +0000 UTC" firstStartedPulling="2026-03-20 08:48:00.968931013 +0000 UTC m=+7113.228242164" lastFinishedPulling="2026-03-20 08:48:02.000259183 +0000 UTC m=+7114.259570354" observedRunningTime="2026-03-20 
08:48:02.292337475 +0000 UTC m=+7114.551648626" watchObservedRunningTime="2026-03-20 08:48:02.295602167 +0000 UTC m=+7114.554913318" Mar 20 08:48:03 crc kubenswrapper[5136]: I0320 08:48:03.298502 5136 generic.go:334] "Generic (PLEG): container finished" podID="ef82e0a5-a043-48d9-82d6-132dbf0e9b74" containerID="fa17c24ddb45b337fff5f936348cd486ca94ec7240798c49bc135753e4d62ff4" exitCode=0 Mar 20 08:48:03 crc kubenswrapper[5136]: I0320 08:48:03.298548 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" event={"ID":"ef82e0a5-a043-48d9-82d6-132dbf0e9b74","Type":"ContainerDied","Data":"fa17c24ddb45b337fff5f936348cd486ca94ec7240798c49bc135753e4d62ff4"} Mar 20 08:48:04 crc kubenswrapper[5136]: I0320 08:48:04.633518 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:04 crc kubenswrapper[5136]: I0320 08:48:04.725166 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9mdc\" (UniqueName: \"kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc\") pod \"ef82e0a5-a043-48d9-82d6-132dbf0e9b74\" (UID: \"ef82e0a5-a043-48d9-82d6-132dbf0e9b74\") " Mar 20 08:48:04 crc kubenswrapper[5136]: I0320 08:48:04.733243 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc" (OuterVolumeSpecName: "kube-api-access-d9mdc") pod "ef82e0a5-a043-48d9-82d6-132dbf0e9b74" (UID: "ef82e0a5-a043-48d9-82d6-132dbf0e9b74"). InnerVolumeSpecName "kube-api-access-d9mdc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:04 crc kubenswrapper[5136]: I0320 08:48:04.826381 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9mdc\" (UniqueName: \"kubernetes.io/projected/ef82e0a5-a043-48d9-82d6-132dbf0e9b74-kube-api-access-d9mdc\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:05 crc kubenswrapper[5136]: I0320 08:48:05.318588 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" event={"ID":"ef82e0a5-a043-48d9-82d6-132dbf0e9b74","Type":"ContainerDied","Data":"46b0c2fb5f21a668a1cd3e8ba21fde944d41173fed7c941c91de2c944a6a6f73"} Mar 20 08:48:05 crc kubenswrapper[5136]: I0320 08:48:05.318626 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46b0c2fb5f21a668a1cd3e8ba21fde944d41173fed7c941c91de2c944a6a6f73" Mar 20 08:48:05 crc kubenswrapper[5136]: I0320 08:48:05.318714 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566608-tdwn4" Mar 20 08:48:05 crc kubenswrapper[5136]: I0320 08:48:05.371657 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566602-fmfs9"] Mar 20 08:48:05 crc kubenswrapper[5136]: I0320 08:48:05.379129 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566602-fmfs9"] Mar 20 08:48:06 crc kubenswrapper[5136]: I0320 08:48:06.410436 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78e36980-52e2-4a59-9374-b2f1150fcb20" path="/var/lib/kubelet/pods/78e36980-52e2-4a59-9374-b2f1150fcb20/volumes" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.406649 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:48:08 crc kubenswrapper[5136]: E0320 08:48:08.406957 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.956963 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-lxzxf"] Mar 20 08:48:08 crc kubenswrapper[5136]: E0320 08:48:08.957270 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef82e0a5-a043-48d9-82d6-132dbf0e9b74" containerName="oc" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.957286 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef82e0a5-a043-48d9-82d6-132dbf0e9b74" containerName="oc" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.957447 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef82e0a5-a043-48d9-82d6-132dbf0e9b74" containerName="oc" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.958009 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.961303 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-crpvv" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.961430 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.961508 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.961778 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Mar 20 08:48:08 crc kubenswrapper[5136]: I0320 08:48:08.961948 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002356 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dbhf\" (UniqueName: \"kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002425 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002454 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift\") pod 
\"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002768 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002855 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002900 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.002964 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.007552 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lxzxf"] Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.038378 5136 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"] Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.039600 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.063378 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"] Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104321 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104380 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104467 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104507 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: 
I0320 08:48:09.104530 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104548 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104569 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104596 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104635 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf7db\" (UniqueName: \"kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104657 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dbhf\" (UniqueName: \"kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104678 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104703 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.104807 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.105295 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.105711 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.124755 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.125587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.126255 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.128581 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dbhf\" (UniqueName: \"kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf\") pod \"swift-ring-rebalance-lxzxf\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.207165 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config\") pod 
\"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.207230 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf7db\" (UniqueName: \"kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.207261 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.207290 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.207356 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.208299 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: 
\"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.208792 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.209528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.210167 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.229184 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xf7db\" (UniqueName: \"kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db\") pod \"dnsmasq-dns-549cfd6bdc-gq4cl\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") " pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.290037 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.364777 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.764216 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-lxzxf"] Mar 20 08:48:09 crc kubenswrapper[5136]: I0320 08:48:09.854280 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"] Mar 20 08:48:10 crc kubenswrapper[5136]: I0320 08:48:10.388666 5136 generic.go:334] "Generic (PLEG): container finished" podID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerID="3fea7d5c06ec7715b7fc0e66f5644f5c5e237f2a08acb713c5d77dc706e25822" exitCode=0 Mar 20 08:48:10 crc kubenswrapper[5136]: I0320 08:48:10.388979 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" event={"ID":"cad94756-feb3-42e4-8c87-b0cfb638edba","Type":"ContainerDied","Data":"3fea7d5c06ec7715b7fc0e66f5644f5c5e237f2a08acb713c5d77dc706e25822"} Mar 20 08:48:10 crc kubenswrapper[5136]: I0320 08:48:10.389004 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" event={"ID":"cad94756-feb3-42e4-8c87-b0cfb638edba","Type":"ContainerStarted","Data":"8ec3eac30ebcf6dd9046afae810790acb77c285633e8d4470487949417fb3311"} Mar 20 08:48:10 crc kubenswrapper[5136]: I0320 08:48:10.422609 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lxzxf" event={"ID":"1513f332-b5c6-40ca-9c3a-4ef7b1f78672","Type":"ContainerStarted","Data":"f85c3bbff6d70b386d31f6a53eb27b133a80a3ef6788031c00b163ca73516c9a"} Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.437693 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" event={"ID":"cad94756-feb3-42e4-8c87-b0cfb638edba","Type":"ContainerStarted","Data":"fde2ab1fdb1e8fbccab29a1af1b697570f2345e5369a2aaab8edc51ff2fc185d"} Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.438876 5136 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.468592 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" podStartSLOduration=2.468573263 podStartE2EDuration="2.468573263s" podCreationTimestamp="2026-03-20 08:48:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:48:11.460150851 +0000 UTC m=+7123.719462002" watchObservedRunningTime="2026-03-20 08:48:11.468573263 +0000 UTC m=+7123.727884414" Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.947556 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.953886 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.956534 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Mar 20 08:48:11 crc kubenswrapper[5136]: I0320 08:48:11.961297 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068118 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068174 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068271 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w85w\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068294 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068330 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.068359 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170452 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170505 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170605 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w85w\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170633 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170681 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.170717 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.171222 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.172527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.178075 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.193579 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w85w\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.194528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: 
\"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.195053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle\") pod \"swift-proxy-64657f9cbd-qbdxx\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:12 crc kubenswrapper[5136]: I0320 08:48:12.281646 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.279004 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-965f7d5f6-cshp2"] Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.280512 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.282132 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.282750 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.337052 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-965f7d5f6-cshp2"] Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.421480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.421896 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.421944 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rjh6\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.421967 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.422025 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.422059 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.422131 
5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.422160 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.527837 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.527975 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528008 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528056 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528099 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528148 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rjh6\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.528233 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.530910 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.541645 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.554995 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.559108 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.559944 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.562117 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift\") pod 
\"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.571716 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.585840 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rjh6\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6\") pod \"swift-proxy-965f7d5f6-cshp2\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") " pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:13 crc kubenswrapper[5136]: I0320 08:48:13.725142 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.022893 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.376239 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-965f7d5f6-cshp2"] Mar 20 08:48:14 crc kubenswrapper[5136]: W0320 08:48:14.383589 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0007e89c_1f52_4ac8_beed_59d6db6e60fd.slice/crio-d98661f6d09e3f930d057fd7583b14591e52595630d9c487ff1f51b1c2eb81d0 WatchSource:0}: Error finding container d98661f6d09e3f930d057fd7583b14591e52595630d9c487ff1f51b1c2eb81d0: Status 404 returned error can't find the container with id d98661f6d09e3f930d057fd7583b14591e52595630d9c487ff1f51b1c2eb81d0 Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.504793 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerStarted","Data":"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5"} Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.504876 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerStarted","Data":"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4"} Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.504893 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerStarted","Data":"eb93409020e8530694a12115b4c16399fd5361531132f289b0cd4e21bb24892f"} Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.504926 5136 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.504944 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.510701 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lxzxf" event={"ID":"1513f332-b5c6-40ca-9c3a-4ef7b1f78672","Type":"ContainerStarted","Data":"bbdf8dabf0d2951b09ae3f63cdd6eda3f6af581fbac68d093607e16820b73e60"} Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.514183 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerStarted","Data":"d98661f6d09e3f930d057fd7583b14591e52595630d9c487ff1f51b1c2eb81d0"} Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.540104 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-64657f9cbd-qbdxx" podStartSLOduration=3.5400825940000002 podStartE2EDuration="3.540082594s" podCreationTimestamp="2026-03-20 08:48:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:48:14.529431743 +0000 UTC m=+7126.788742904" watchObservedRunningTime="2026-03-20 08:48:14.540082594 +0000 UTC m=+7126.799393745" Mar 20 08:48:14 crc kubenswrapper[5136]: I0320 08:48:14.553700 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-lxzxf" podStartSLOduration=3.063422883 podStartE2EDuration="6.553681446s" podCreationTimestamp="2026-03-20 08:48:08 +0000 UTC" firstStartedPulling="2026-03-20 08:48:09.769112523 +0000 UTC m=+7122.028423674" lastFinishedPulling="2026-03-20 08:48:13.259371086 +0000 UTC m=+7125.518682237" observedRunningTime="2026-03-20 08:48:14.552374815 +0000 UTC m=+7126.811685986" 
watchObservedRunningTime="2026-03-20 08:48:14.553681446 +0000 UTC m=+7126.812992597" Mar 20 08:48:15 crc kubenswrapper[5136]: I0320 08:48:15.548913 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerStarted","Data":"9a6fe348ea134460d09531b2378ad3abce82d81a5457e369dbee025701fbe318"} Mar 20 08:48:15 crc kubenswrapper[5136]: I0320 08:48:15.549347 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:15 crc kubenswrapper[5136]: I0320 08:48:15.549380 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:15 crc kubenswrapper[5136]: I0320 08:48:15.549390 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerStarted","Data":"5e5a77d7952567153e8f93b101532ad12cc95f1597c77efe34c080e974b22447"} Mar 20 08:48:15 crc kubenswrapper[5136]: I0320 08:48:15.578476 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-965f7d5f6-cshp2" podStartSLOduration=2.5784521419999997 podStartE2EDuration="2.578452142s" podCreationTimestamp="2026-03-20 08:48:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:48:15.576313055 +0000 UTC m=+7127.835624226" watchObservedRunningTime="2026-03-20 08:48:15.578452142 +0000 UTC m=+7127.837763293" Mar 20 08:48:17 crc kubenswrapper[5136]: I0320 08:48:17.564987 5136 generic.go:334] "Generic (PLEG): container finished" podID="1513f332-b5c6-40ca-9c3a-4ef7b1f78672" containerID="bbdf8dabf0d2951b09ae3f63cdd6eda3f6af581fbac68d093607e16820b73e60" exitCode=0 Mar 20 08:48:17 crc kubenswrapper[5136]: I0320 08:48:17.565052 5136 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/swift-ring-rebalance-lxzxf" event={"ID":"1513f332-b5c6-40ca-9c3a-4ef7b1f78672","Type":"ContainerDied","Data":"bbdf8dabf0d2951b09ae3f63cdd6eda3f6af581fbac68d093607e16820b73e60"} Mar 20 08:48:18 crc kubenswrapper[5136]: I0320 08:48:18.914118 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.018703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.018763 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.018794 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.018853 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dbhf\" (UniqueName: \"kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.018961 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" 
(UniqueName: \"kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.019004 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.019051 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts\") pod \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\" (UID: \"1513f332-b5c6-40ca-9c3a-4ef7b1f78672\") " Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.019497 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.020315 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.030168 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf" (OuterVolumeSpecName: "kube-api-access-4dbhf") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "kube-api-access-4dbhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.033571 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.045905 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.050189 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.053194 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts" (OuterVolumeSpecName: "scripts") pod "1513f332-b5c6-40ca-9c3a-4ef7b1f78672" (UID: "1513f332-b5c6-40ca-9c3a-4ef7b1f78672"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121154 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121197 5136 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-swiftconf\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121211 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121222 5136 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-ring-data-devices\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121235 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121245 5136 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: 
\"kubernetes.io/secret/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-dispersionconf\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.121257 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dbhf\" (UniqueName: \"kubernetes.io/projected/1513f332-b5c6-40ca-9c3a-4ef7b1f78672-kube-api-access-4dbhf\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.367003 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.396986 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.431481 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"] Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.431742 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="dnsmasq-dns" containerID="cri-o://85a4536c0d633a089bcb60d3330e0a39a977db11277f10ac374ebe343ed461c2" gracePeriod=10 Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.585419 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-lxzxf" event={"ID":"1513f332-b5c6-40ca-9c3a-4ef7b1f78672","Type":"ContainerDied","Data":"f85c3bbff6d70b386d31f6a53eb27b133a80a3ef6788031c00b163ca73516c9a"} Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.585460 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f85c3bbff6d70b386d31f6a53eb27b133a80a3ef6788031c00b163ca73516c9a" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.585534 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-lxzxf" Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.592730 5136 generic.go:334] "Generic (PLEG): container finished" podID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerID="85a4536c0d633a089bcb60d3330e0a39a977db11277f10ac374ebe343ed461c2" exitCode=0 Mar 20 08:48:19 crc kubenswrapper[5136]: I0320 08:48:19.592779 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" event={"ID":"4a740a83-3e08-402b-9e5b-6c8a62a80435","Type":"ContainerDied","Data":"85a4536c0d633a089bcb60d3330e0a39a977db11277f10ac374ebe343ed461c2"} Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.013187 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.148588 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nj86x\" (UniqueName: \"kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x\") pod \"4a740a83-3e08-402b-9e5b-6c8a62a80435\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.148978 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc\") pod \"4a740a83-3e08-402b-9e5b-6c8a62a80435\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.149038 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config\") pod \"4a740a83-3e08-402b-9e5b-6c8a62a80435\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.149122 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb\") pod \"4a740a83-3e08-402b-9e5b-6c8a62a80435\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.149188 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb\") pod \"4a740a83-3e08-402b-9e5b-6c8a62a80435\" (UID: \"4a740a83-3e08-402b-9e5b-6c8a62a80435\") " Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.156916 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x" (OuterVolumeSpecName: "kube-api-access-nj86x") pod "4a740a83-3e08-402b-9e5b-6c8a62a80435" (UID: "4a740a83-3e08-402b-9e5b-6c8a62a80435"). InnerVolumeSpecName "kube-api-access-nj86x". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.197172 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4a740a83-3e08-402b-9e5b-6c8a62a80435" (UID: "4a740a83-3e08-402b-9e5b-6c8a62a80435"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.203044 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4a740a83-3e08-402b-9e5b-6c8a62a80435" (UID: "4a740a83-3e08-402b-9e5b-6c8a62a80435"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.203999 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config" (OuterVolumeSpecName: "config") pod "4a740a83-3e08-402b-9e5b-6c8a62a80435" (UID: "4a740a83-3e08-402b-9e5b-6c8a62a80435"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.231492 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a740a83-3e08-402b-9e5b-6c8a62a80435" (UID: "4a740a83-3e08-402b-9e5b-6c8a62a80435"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.250542 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.250569 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.250581 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nj86x\" (UniqueName: \"kubernetes.io/projected/4a740a83-3e08-402b-9e5b-6c8a62a80435-kube-api-access-nj86x\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.250590 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:20 crc 
kubenswrapper[5136]: I0320 08:48:20.250598 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a740a83-3e08-402b-9e5b-6c8a62a80435-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.602569 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62"} Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.605908 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" event={"ID":"4a740a83-3e08-402b-9e5b-6c8a62a80435","Type":"ContainerDied","Data":"676c8c5cdb1dfc8268622e52dd2300c796e32f8a976b6c157621d92d03db62f0"} Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.605974 5136 scope.go:117] "RemoveContainer" containerID="85a4536c0d633a089bcb60d3330e0a39a977db11277f10ac374ebe343ed461c2" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.606004 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6968b46cdc-n6kjz" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.630148 5136 scope.go:117] "RemoveContainer" containerID="8f73d6f969a6dcfad143a7dea5aee18ef87be55ad10ac352de6c2af3efe4415d" Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.664597 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"] Mar 20 08:48:20 crc kubenswrapper[5136]: I0320 08:48:20.671535 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6968b46cdc-n6kjz"] Mar 20 08:48:22 crc kubenswrapper[5136]: I0320 08:48:22.284717 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:22 crc kubenswrapper[5136]: I0320 08:48:22.285090 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:22 crc kubenswrapper[5136]: I0320 08:48:22.408348 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" path="/var/lib/kubelet/pods/4a740a83-3e08-402b-9e5b-6c8a62a80435/volumes" Mar 20 08:48:23 crc kubenswrapper[5136]: I0320 08:48:23.730010 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:23 crc kubenswrapper[5136]: I0320 08:48:23.730386 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 08:48:23 crc kubenswrapper[5136]: I0320 08:48:23.813284 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:23 crc kubenswrapper[5136]: I0320 08:48:23.815670 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-64657f9cbd-qbdxx" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-httpd" 
containerID="cri-o://d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" gracePeriod=30 Mar 20 08:48:23 crc kubenswrapper[5136]: I0320 08:48:23.816123 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-64657f9cbd-qbdxx" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-server" containerID="cri-o://90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" gracePeriod=30 Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.508667 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.628676 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle\") pod \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.628806 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w85w\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w\") pod \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.628954 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data\") pod \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.628981 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd\") pod 
\"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.629015 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift\") pod \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.629051 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd\") pod \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\" (UID: \"5550afcf-085f-4f88-b901-dc9b4cf9fb7e\") " Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.629862 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.629902 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.636691 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). 
InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.652259 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w" (OuterVolumeSpecName: "kube-api-access-9w85w") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). InnerVolumeSpecName "kube-api-access-9w85w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.678914 5136 generic.go:334] "Generic (PLEG): container finished" podID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerID="90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" exitCode=0 Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.678961 5136 generic.go:334] "Generic (PLEG): container finished" podID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerID="d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" exitCode=0 Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.680053 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-64657f9cbd-qbdxx" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.680480 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerDied","Data":"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5"} Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.680511 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerDied","Data":"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4"} Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.680526 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-64657f9cbd-qbdxx" event={"ID":"5550afcf-085f-4f88-b901-dc9b4cf9fb7e","Type":"ContainerDied","Data":"eb93409020e8530694a12115b4c16399fd5361531132f289b0cd4e21bb24892f"} Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.680542 5136 scope.go:117] "RemoveContainer" containerID="90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.683645 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data" (OuterVolumeSpecName: "config-data") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.687989 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5550afcf-085f-4f88-b901-dc9b4cf9fb7e" (UID: "5550afcf-085f-4f88-b901-dc9b4cf9fb7e"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731727 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731778 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731788 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731798 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731809 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.731835 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w85w\" (UniqueName: \"kubernetes.io/projected/5550afcf-085f-4f88-b901-dc9b4cf9fb7e-kube-api-access-9w85w\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.760943 5136 scope.go:117] "RemoveContainer" containerID="d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.826186 5136 scope.go:117] "RemoveContainer" 
containerID="90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" Mar 20 08:48:24 crc kubenswrapper[5136]: E0320 08:48:24.827585 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5\": container with ID starting with 90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5 not found: ID does not exist" containerID="90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.827635 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5"} err="failed to get container status \"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5\": rpc error: code = NotFound desc = could not find container \"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5\": container with ID starting with 90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5 not found: ID does not exist" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.827666 5136 scope.go:117] "RemoveContainer" containerID="d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" Mar 20 08:48:24 crc kubenswrapper[5136]: E0320 08:48:24.828407 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4\": container with ID starting with d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4 not found: ID does not exist" containerID="d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.828466 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4"} err="failed to get container status \"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4\": rpc error: code = NotFound desc = could not find container \"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4\": container with ID starting with d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4 not found: ID does not exist" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.828509 5136 scope.go:117] "RemoveContainer" containerID="90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.832357 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5"} err="failed to get container status \"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5\": rpc error: code = NotFound desc = could not find container \"90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5\": container with ID starting with 90827efbbb8f96548345963e1efa7b470288e7723585d2061ce0778fc2c16ef5 not found: ID does not exist" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.832440 5136 scope.go:117] "RemoveContainer" containerID="d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4" Mar 20 08:48:24 crc kubenswrapper[5136]: I0320 08:48:24.832837 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4"} err="failed to get container status \"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4\": rpc error: code = NotFound desc = could not find container \"d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4\": container with ID starting with d7c75a33cb29373293de8e9711821c0a50d3ba87ea2577498423b09604997fd4 not found: ID does not 
exist" Mar 20 08:48:25 crc kubenswrapper[5136]: I0320 08:48:25.021872 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:25 crc kubenswrapper[5136]: I0320 08:48:25.031076 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-64657f9cbd-qbdxx"] Mar 20 08:48:26 crc kubenswrapper[5136]: I0320 08:48:26.407952 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" path="/var/lib/kubelet/pods/5550afcf-085f-4f88-b901-dc9b4cf9fb7e/volumes" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.143264 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-vgnkq"] Mar 20 08:48:30 crc kubenswrapper[5136]: E0320 08:48:30.144156 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1513f332-b5c6-40ca-9c3a-4ef7b1f78672" containerName="swift-ring-rebalance" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144168 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1513f332-b5c6-40ca-9c3a-4ef7b1f78672" containerName="swift-ring-rebalance" Mar 20 08:48:30 crc kubenswrapper[5136]: E0320 08:48:30.144182 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="init" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144189 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="init" Mar 20 08:48:30 crc kubenswrapper[5136]: E0320 08:48:30.144199 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="dnsmasq-dns" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144206 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="dnsmasq-dns" Mar 20 08:48:30 crc kubenswrapper[5136]: E0320 08:48:30.144218 5136 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-server" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144224 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-server" Mar 20 08:48:30 crc kubenswrapper[5136]: E0320 08:48:30.144240 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-httpd" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144246 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-httpd" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144420 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-server" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144432 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a740a83-3e08-402b-9e5b-6c8a62a80435" containerName="dnsmasq-dns" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144444 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5550afcf-085f-4f88-b901-dc9b4cf9fb7e" containerName="proxy-httpd" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.144453 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1513f332-b5c6-40ca-9c3a-4ef7b1f78672" containerName="swift-ring-rebalance" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.145986 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.151395 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b8c9-account-create-update-8v2dt"] Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.153347 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.157288 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.170493 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vgnkq"] Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.181272 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b8c9-account-create-update-8v2dt"] Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.225489 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.225558 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsxz4\" (UniqueName: \"kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.225891 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79jgk\" (UniqueName: \"kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.226056 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.327484 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79jgk\" (UniqueName: \"kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.327546 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.327585 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.327624 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsxz4\" (UniqueName: \"kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.328427 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.328465 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.346628 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79jgk\" (UniqueName: \"kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk\") pod \"cinder-b8c9-account-create-update-8v2dt\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.346922 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsxz4\" (UniqueName: \"kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4\") pod \"cinder-db-create-vgnkq\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.472159 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.483678 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.927923 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-vgnkq"] Mar 20 08:48:30 crc kubenswrapper[5136]: W0320 08:48:30.936137 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63d0704b_80fd_44fe_9007_2971cc8a6cf6.slice/crio-b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830 WatchSource:0}: Error finding container b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830: Status 404 returned error can't find the container with id b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830 Mar 20 08:48:30 crc kubenswrapper[5136]: W0320 08:48:30.974167 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0863275_620b_4bea_a747_135c323ebb6f.slice/crio-20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1 WatchSource:0}: Error finding container 20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1: Status 404 returned error can't find the container with id 20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1 Mar 20 08:48:30 crc kubenswrapper[5136]: I0320 08:48:30.974750 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b8c9-account-create-update-8v2dt"] Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.735918 5136 generic.go:334] "Generic (PLEG): container finished" podID="63d0704b-80fd-44fe-9007-2971cc8a6cf6" containerID="ba829091226a089834672fdb8aaa0264ffcab6218d4874fe20d15ed41e821de5" exitCode=0 Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.735978 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vgnkq" 
event={"ID":"63d0704b-80fd-44fe-9007-2971cc8a6cf6","Type":"ContainerDied","Data":"ba829091226a089834672fdb8aaa0264ffcab6218d4874fe20d15ed41e821de5"} Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.736003 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vgnkq" event={"ID":"63d0704b-80fd-44fe-9007-2971cc8a6cf6","Type":"ContainerStarted","Data":"b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830"} Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.739054 5136 generic.go:334] "Generic (PLEG): container finished" podID="f0863275-620b-4bea-a747-135c323ebb6f" containerID="6d85db0ede2cb37b721e22824a2dda96a152a59cfb86afea2b68c0eedbe79e58" exitCode=0 Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.739090 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b8c9-account-create-update-8v2dt" event={"ID":"f0863275-620b-4bea-a747-135c323ebb6f","Type":"ContainerDied","Data":"6d85db0ede2cb37b721e22824a2dda96a152a59cfb86afea2b68c0eedbe79e58"} Mar 20 08:48:31 crc kubenswrapper[5136]: I0320 08:48:31.739112 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b8c9-account-create-update-8v2dt" event={"ID":"f0863275-620b-4bea-a747-135c323ebb6f","Type":"ContainerStarted","Data":"20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1"} Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.161629 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.167576 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.292268 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts\") pod \"f0863275-620b-4bea-a747-135c323ebb6f\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.292459 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts\") pod \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.292518 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79jgk\" (UniqueName: \"kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk\") pod \"f0863275-620b-4bea-a747-135c323ebb6f\" (UID: \"f0863275-620b-4bea-a747-135c323ebb6f\") " Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.292555 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsxz4\" (UniqueName: \"kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4\") pod \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\" (UID: \"63d0704b-80fd-44fe-9007-2971cc8a6cf6\") " Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.293063 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f0863275-620b-4bea-a747-135c323ebb6f" (UID: "f0863275-620b-4bea-a747-135c323ebb6f"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.293171 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "63d0704b-80fd-44fe-9007-2971cc8a6cf6" (UID: "63d0704b-80fd-44fe-9007-2971cc8a6cf6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.298348 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk" (OuterVolumeSpecName: "kube-api-access-79jgk") pod "f0863275-620b-4bea-a747-135c323ebb6f" (UID: "f0863275-620b-4bea-a747-135c323ebb6f"). InnerVolumeSpecName "kube-api-access-79jgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.298390 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4" (OuterVolumeSpecName: "kube-api-access-nsxz4") pod "63d0704b-80fd-44fe-9007-2971cc8a6cf6" (UID: "63d0704b-80fd-44fe-9007-2971cc8a6cf6"). InnerVolumeSpecName "kube-api-access-nsxz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.394864 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/63d0704b-80fd-44fe-9007-2971cc8a6cf6-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.394896 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79jgk\" (UniqueName: \"kubernetes.io/projected/f0863275-620b-4bea-a747-135c323ebb6f-kube-api-access-79jgk\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.394915 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsxz4\" (UniqueName: \"kubernetes.io/projected/63d0704b-80fd-44fe-9007-2971cc8a6cf6-kube-api-access-nsxz4\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.394934 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f0863275-620b-4bea-a747-135c323ebb6f-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.763925 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-vgnkq" event={"ID":"63d0704b-80fd-44fe-9007-2971cc8a6cf6","Type":"ContainerDied","Data":"b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830"} Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.763971 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f74e241023d6af0778a504810ccff18f4409a6f8b0d6e50c4f71d091e05830" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.764001 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-vgnkq" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.766057 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b8c9-account-create-update-8v2dt" event={"ID":"f0863275-620b-4bea-a747-135c323ebb6f","Type":"ContainerDied","Data":"20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1"} Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.766103 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20921273cacffba7ca344518246963a37057b993a41d894e140137b5f3a100c1" Mar 20 08:48:33 crc kubenswrapper[5136]: I0320 08:48:33.766114 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-8v2dt" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.385863 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-mwt5p"] Mar 20 08:48:35 crc kubenswrapper[5136]: E0320 08:48:35.386190 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d0704b-80fd-44fe-9007-2971cc8a6cf6" containerName="mariadb-database-create" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.386202 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d0704b-80fd-44fe-9007-2971cc8a6cf6" containerName="mariadb-database-create" Mar 20 08:48:35 crc kubenswrapper[5136]: E0320 08:48:35.386221 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0863275-620b-4bea-a747-135c323ebb6f" containerName="mariadb-account-create-update" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.386228 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0863275-620b-4bea-a747-135c323ebb6f" containerName="mariadb-account-create-update" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.386379 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0863275-620b-4bea-a747-135c323ebb6f" 
containerName="mariadb-account-create-update" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.386405 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d0704b-80fd-44fe-9007-2971cc8a6cf6" containerName="mariadb-database-create" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.386923 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.388752 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.389381 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bjz62" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.392454 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.426024 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mwt5p"] Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.429914 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.429987 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.430048 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.430085 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.430118 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.430251 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88hl6\" (UniqueName: \"kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.531775 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.531886 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.531909 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.531954 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.531981 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.532007 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.532103 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88hl6\" (UniqueName: \"kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6\") pod \"cinder-db-sync-mwt5p\" 
(UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.536489 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.537626 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.538421 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.539710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.551394 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88hl6\" (UniqueName: \"kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6\") pod \"cinder-db-sync-mwt5p\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:35 crc kubenswrapper[5136]: I0320 08:48:35.725366 5136 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:36 crc kubenswrapper[5136]: I0320 08:48:36.162141 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-mwt5p"] Mar 20 08:48:36 crc kubenswrapper[5136]: W0320 08:48:36.163986 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod695202be_4633_411e_9afe_fd706e1cfbe6.slice/crio-2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0 WatchSource:0}: Error finding container 2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0: Status 404 returned error can't find the container with id 2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0 Mar 20 08:48:36 crc kubenswrapper[5136]: I0320 08:48:36.791701 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwt5p" event={"ID":"695202be-4633-411e-9afe-fd706e1cfbe6","Type":"ContainerStarted","Data":"2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0"} Mar 20 08:48:38 crc kubenswrapper[5136]: I0320 08:48:38.748649 5136 scope.go:117] "RemoveContainer" containerID="4d16220fc9b1db88fdb1fbb167050afb3f65c942a2a02caf4ba1ec80a2858ccc" Mar 20 08:48:55 crc kubenswrapper[5136]: I0320 08:48:55.964962 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwt5p" event={"ID":"695202be-4633-411e-9afe-fd706e1cfbe6","Type":"ContainerStarted","Data":"44fe55ac22ffb29c918ccdbeaa595a57a885f7bfeba75321b48d4b09c0926e19"} Mar 20 08:48:57 crc kubenswrapper[5136]: I0320 08:48:57.980580 5136 generic.go:334] "Generic (PLEG): container finished" podID="695202be-4633-411e-9afe-fd706e1cfbe6" containerID="44fe55ac22ffb29c918ccdbeaa595a57a885f7bfeba75321b48d4b09c0926e19" exitCode=0 Mar 20 08:48:57 crc kubenswrapper[5136]: I0320 08:48:57.980698 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwt5p" 
event={"ID":"695202be-4633-411e-9afe-fd706e1cfbe6","Type":"ContainerDied","Data":"44fe55ac22ffb29c918ccdbeaa595a57a885f7bfeba75321b48d4b09c0926e19"} Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.309273 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374085 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id\") pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88hl6\" (UniqueName: \"kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6\") pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374186 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle\") pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374225 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts\") pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374263 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data\") 
pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.374435 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data\") pod \"695202be-4633-411e-9afe-fd706e1cfbe6\" (UID: \"695202be-4633-411e-9afe-fd706e1cfbe6\") " Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.375402 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.380399 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts" (OuterVolumeSpecName: "scripts") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.382008 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6" (OuterVolumeSpecName: "kube-api-access-88hl6") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "kube-api-access-88hl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.394058 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.429403 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.443538 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data" (OuterVolumeSpecName: "config-data") pod "695202be-4633-411e-9afe-fd706e1cfbe6" (UID: "695202be-4633-411e-9afe-fd706e1cfbe6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.476327 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/695202be-4633-411e-9afe-fd706e1cfbe6-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.477748 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88hl6\" (UniqueName: \"kubernetes.io/projected/695202be-4633-411e-9afe-fd706e1cfbe6-kube-api-access-88hl6\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.477898 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.478009 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.478119 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:48:59 crc kubenswrapper[5136]: I0320 08:48:59.478221 5136 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/695202be-4633-411e-9afe-fd706e1cfbe6-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.004950 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-mwt5p" event={"ID":"695202be-4633-411e-9afe-fd706e1cfbe6","Type":"ContainerDied","Data":"2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0"} Mar 20 08:49:00 crc 
kubenswrapper[5136]: I0320 08:49:00.005006 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2014bb0924c9661fe4fb527d0db122e215a97354ccd2169fbd6f49626faf74d0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.005028 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-mwt5p" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.321857 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"] Mar 20 08:49:00 crc kubenswrapper[5136]: E0320 08:49:00.322502 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695202be-4633-411e-9afe-fd706e1cfbe6" containerName="cinder-db-sync" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.322517 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="695202be-4633-411e-9afe-fd706e1cfbe6" containerName="cinder-db-sync" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.322671 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="695202be-4633-411e-9afe-fd706e1cfbe6" containerName="cinder-db-sync" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.323499 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.339939 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"] Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.394307 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.394397 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.394423 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.394470 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hffqp\" (UniqueName: \"kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.394503 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.457135 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.458507 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.460173 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.460237 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.465020 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.468776 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bjz62" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.478715 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.496282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.497195 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.497515 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.497549 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.498466 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.498535 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hffqp\" (UniqueName: \"kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.498650 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config\") pod 
\"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.498921 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.499528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.518832 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hffqp\" (UniqueName: \"kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp\") pod \"dnsmasq-dns-78db57ffd5-mzbfx\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600386 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600450 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpz9s\" (UniqueName: \"kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s\") pod \"cinder-api-0\" (UID: 
\"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600476 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600516 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600557 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600601 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.600671 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.641200 5136 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702579 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpz9s\" (UniqueName: \"kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702614 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702659 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702700 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702744 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702839 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.702866 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.703558 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.703782 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.707056 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.707715 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc 
kubenswrapper[5136]: I0320 08:49:00.708420 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.715911 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.724550 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpz9s\" (UniqueName: \"kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s\") pod \"cinder-api-0\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") " pod="openstack/cinder-api-0" Mar 20 08:49:00 crc kubenswrapper[5136]: I0320 08:49:00.772024 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:01 crc kubenswrapper[5136]: I0320 08:49:01.163119 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"] Mar 20 08:49:01 crc kubenswrapper[5136]: I0320 08:49:01.253013 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.023200 5136 generic.go:334] "Generic (PLEG): container finished" podID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerID="3ff6f40c02029e2b21fb76159d8a4a46d3d5ada3e12371991cb9ff0c2549f74e" exitCode=0 Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.023468 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" event={"ID":"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22","Type":"ContainerDied","Data":"3ff6f40c02029e2b21fb76159d8a4a46d3d5ada3e12371991cb9ff0c2549f74e"} Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.023547 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" event={"ID":"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22","Type":"ContainerStarted","Data":"1060eb1db927e589f382cb4a2cb4756b677bac6f172c644881bb0448e3071e35"} Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.029128 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerStarted","Data":"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"} Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.029176 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerStarted","Data":"7f6b9df02974b1bde727c6a4b1dca2701c65ea0d314610d44c06bf87698b157b"} Mar 20 08:49:02 crc kubenswrapper[5136]: I0320 08:49:02.367757 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 
20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.037495 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerStarted","Data":"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"}
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.037862 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.037638 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api" containerID="cri-o://a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1" gracePeriod=30
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.037590 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api-log" containerID="cri-o://cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db" gracePeriod=30
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.040640 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" event={"ID":"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22","Type":"ContainerStarted","Data":"70907013083abb8f01ef74071f6df304ed026e839ff073ff1e663910330022e5"}
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.041416 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx"
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.059006 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.058987146 podStartE2EDuration="3.058987146s" podCreationTimestamp="2026-03-20 08:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:03.058557502 +0000 UTC m=+7175.317868663" watchObservedRunningTime="2026-03-20 08:49:03.058987146 +0000 UTC m=+7175.318298297"
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.081208 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" podStartSLOduration=3.081189515 podStartE2EDuration="3.081189515s" podCreationTimestamp="2026-03-20 08:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:03.07460245 +0000 UTC m=+7175.333913621" watchObservedRunningTime="2026-03-20 08:49:03.081189515 +0000 UTC m=+7175.340500666"
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.666742 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.757763 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") "
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758192 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") "
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.757912 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758251 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") "
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758314 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") "
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758353 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpz9s\" (UniqueName: \"kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") "
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758446 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") "
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758487 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts\") pod \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\" (UID: \"fe8a3d90-f1a7-46e4-9ab9-c6332b728809\") "
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758764 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs" (OuterVolumeSpecName: "logs") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758948 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-logs\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.758972 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.764239 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.765152 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts" (OuterVolumeSpecName: "scripts") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.767069 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s" (OuterVolumeSpecName: "kube-api-access-fpz9s") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "kube-api-access-fpz9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.783690 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.805332 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data" (OuterVolumeSpecName: "config-data") pod "fe8a3d90-f1a7-46e4-9ab9-c6332b728809" (UID: "fe8a3d90-f1a7-46e4-9ab9-c6332b728809"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.860180 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.860216 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.860227 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.860236 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpz9s\" (UniqueName: \"kubernetes.io/projected/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-kube-api-access-fpz9s\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:03 crc kubenswrapper[5136]: I0320 08:49:03.860245 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe8a3d90-f1a7-46e4-9ab9-c6332b728809-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051417 5136 generic.go:334] "Generic (PLEG): container finished" podID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerID="a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1" exitCode=0
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051448 5136 generic.go:334] "Generic (PLEG): container finished" podID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerID="cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db" exitCode=143
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051476 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051524 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerDied","Data":"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"}
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051577 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerDied","Data":"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"}
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fe8a3d90-f1a7-46e4-9ab9-c6332b728809","Type":"ContainerDied","Data":"7f6b9df02974b1bde727c6a4b1dca2701c65ea0d314610d44c06bf87698b157b"}
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.051645 5136 scope.go:117] "RemoveContainer" containerID="a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.071226 5136 scope.go:117] "RemoveContainer" containerID="cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.086666 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.090233 5136 scope.go:117] "RemoveContainer" containerID="a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"
Mar 20 08:49:04 crc kubenswrapper[5136]: E0320 08:49:04.090700 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1\": container with ID starting with a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1 not found: ID does not exist" containerID="a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.090760 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"} err="failed to get container status \"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1\": rpc error: code = NotFound desc = could not find container \"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1\": container with ID starting with a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1 not found: ID does not exist"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.090779 5136 scope.go:117] "RemoveContainer" containerID="cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"
Mar 20 08:49:04 crc kubenswrapper[5136]: E0320 08:49:04.093235 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db\": container with ID starting with cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db not found: ID does not exist" containerID="cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.093291 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"} err="failed to get container status \"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db\": rpc error: code = NotFound desc = could not find container \"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db\": container with ID starting with cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db not found: ID does not exist"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.093308 5136 scope.go:117] "RemoveContainer" containerID="a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.093548 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1"} err="failed to get container status \"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1\": rpc error: code = NotFound desc = could not find container \"a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1\": container with ID starting with a80a48b12c2e5f37ef96b72423f291a4682cfef897cb29fb7c0c28a4245d1ca1 not found: ID does not exist"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.093565 5136 scope.go:117] "RemoveContainer" containerID="cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.097135 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db"} err="failed to get container status \"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db\": rpc error: code = NotFound desc = could not find container \"cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db\": container with ID starting with cb9318aa384e2d12db947ce57f4d92cc4996ceb879679a4a7931edbf27ffa2db not found: ID does not exist"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.100908 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"]
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.110794 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"]
Mar 20 08:49:04 crc kubenswrapper[5136]: E0320 08:49:04.111197 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api-log"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.111215 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api-log"
Mar 20 08:49:04 crc kubenswrapper[5136]: E0320 08:49:04.111236 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.111245 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.111395 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api-log"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.111414 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" containerName="cinder-api"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.112274 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.115865 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.117669 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.119212 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.119459 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.119650 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-bjz62"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.119961 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.121908 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165042 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165128 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165159 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165179 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165323 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165360 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165472 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165621 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4rq6\" (UniqueName: \"kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.165709 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267402 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267434 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267472 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267502 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267532 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267565 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267600 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267650 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4rq6\" (UniqueName: \"kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.267979 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.268308 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.271492 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.271806 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.271975 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.272264 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.272512 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.273261 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.282417 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4rq6\" (UniqueName: \"kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6\") pod \"cinder-api-0\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.406597 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe8a3d90-f1a7-46e4-9ab9-c6332b728809" path="/var/lib/kubelet/pods/fe8a3d90-f1a7-46e4-9ab9-c6332b728809/volumes"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.474616 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Mar 20 08:49:04 crc kubenswrapper[5136]: I0320 08:49:04.897757 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Mar 20 08:49:04 crc kubenswrapper[5136]: W0320 08:49:04.904479 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod899ba9fb_f6d6_4063_9489_482bdf8cb9c4.slice/crio-ae6df973bb937e0388d9cc5aabb53d0a185dae68f0fbc7da3126af71130384e1 WatchSource:0}: Error finding container ae6df973bb937e0388d9cc5aabb53d0a185dae68f0fbc7da3126af71130384e1: Status 404 returned error can't find the container with id ae6df973bb937e0388d9cc5aabb53d0a185dae68f0fbc7da3126af71130384e1
Mar 20 08:49:05 crc kubenswrapper[5136]: I0320 08:49:05.062535 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerStarted","Data":"ae6df973bb937e0388d9cc5aabb53d0a185dae68f0fbc7da3126af71130384e1"}
Mar 20 08:49:06 crc kubenswrapper[5136]: I0320 08:49:06.084282 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerStarted","Data":"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994"}
Mar 20 08:49:06 crc kubenswrapper[5136]: I0320 08:49:06.084719 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerStarted","Data":"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8"}
Mar 20 08:49:06 crc kubenswrapper[5136]: I0320 08:49:06.085724 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Mar 20 08:49:06 crc kubenswrapper[5136]: I0320 08:49:06.125469 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=2.12545163 podStartE2EDuration="2.12545163s" podCreationTimestamp="2026-03-20 08:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:06.107409011 +0000 UTC m=+7178.366720182" watchObservedRunningTime="2026-03-20 08:49:06.12545163 +0000 UTC m=+7178.384762781"
Mar 20 08:49:10 crc kubenswrapper[5136]: I0320 08:49:10.642968 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx"
Mar 20 08:49:10 crc kubenswrapper[5136]: I0320 08:49:10.743473 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"]
Mar 20 08:49:10 crc kubenswrapper[5136]: I0320 08:49:10.743764 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="dnsmasq-dns" containerID="cri-o://fde2ab1fdb1e8fbccab29a1af1b697570f2345e5369a2aaab8edc51ff2fc185d" gracePeriod=10
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.137923 5136 generic.go:334] "Generic (PLEG): container finished" podID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerID="fde2ab1fdb1e8fbccab29a1af1b697570f2345e5369a2aaab8edc51ff2fc185d" exitCode=0
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.138454 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" event={"ID":"cad94756-feb3-42e4-8c87-b0cfb638edba","Type":"ContainerDied","Data":"fde2ab1fdb1e8fbccab29a1af1b697570f2345e5369a2aaab8edc51ff2fc185d"}
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.308119 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl"
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.504356 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb\") pod \"cad94756-feb3-42e4-8c87-b0cfb638edba\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") "
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.504864 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf7db\" (UniqueName: \"kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db\") pod \"cad94756-feb3-42e4-8c87-b0cfb638edba\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") "
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.505026 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config\") pod \"cad94756-feb3-42e4-8c87-b0cfb638edba\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") "
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.505249 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc\") pod \"cad94756-feb3-42e4-8c87-b0cfb638edba\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") "
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.505443 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb\") pod \"cad94756-feb3-42e4-8c87-b0cfb638edba\" (UID: \"cad94756-feb3-42e4-8c87-b0cfb638edba\") "
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.511727 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db" (OuterVolumeSpecName: "kube-api-access-xf7db") pod "cad94756-feb3-42e4-8c87-b0cfb638edba" (UID: "cad94756-feb3-42e4-8c87-b0cfb638edba"). InnerVolumeSpecName "kube-api-access-xf7db". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.543547 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cad94756-feb3-42e4-8c87-b0cfb638edba" (UID: "cad94756-feb3-42e4-8c87-b0cfb638edba"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.559099 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cad94756-feb3-42e4-8c87-b0cfb638edba" (UID: "cad94756-feb3-42e4-8c87-b0cfb638edba"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.564329 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cad94756-feb3-42e4-8c87-b0cfb638edba" (UID: "cad94756-feb3-42e4-8c87-b0cfb638edba"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.578462 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config" (OuterVolumeSpecName: "config") pod "cad94756-feb3-42e4-8c87-b0cfb638edba" (UID: "cad94756-feb3-42e4-8c87-b0cfb638edba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.607206 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf7db\" (UniqueName: \"kubernetes.io/projected/cad94756-feb3-42e4-8c87-b0cfb638edba-kube-api-access-xf7db\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.607248 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-config\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.607260 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-dns-svc\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.607272 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:11 crc kubenswrapper[5136]: I0320 08:49:11.607282 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cad94756-feb3-42e4-8c87-b0cfb638edba-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.148033 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl" event={"ID":"cad94756-feb3-42e4-8c87-b0cfb638edba","Type":"ContainerDied","Data":"8ec3eac30ebcf6dd9046afae810790acb77c285633e8d4470487949417fb3311"}
Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.148318 5136 scope.go:117] "RemoveContainer" containerID="fde2ab1fdb1e8fbccab29a1af1b697570f2345e5369a2aaab8edc51ff2fc185d"
Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.148133 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549cfd6bdc-gq4cl"
Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.171023 5136 scope.go:117] "RemoveContainer" containerID="3fea7d5c06ec7715b7fc0e66f5644f5c5e237f2a08acb713c5d77dc706e25822"
Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.189605 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"]
Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.195674 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-549cfd6bdc-gq4cl"]
Mar 20 08:49:12 crc kubenswrapper[5136]: I0320 08:49:12.410292 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" path="/var/lib/kubelet/pods/cad94756-feb3-42e4-8c87-b0cfb638edba/volumes"
Mar 20 08:49:16 crc kubenswrapper[5136]: I0320 08:49:16.298404 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.060259 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-87jpl"]
Mar 20 08:49:36 crc kubenswrapper[5136]: E0320 08:49:36.061008 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="dnsmasq-dns"
Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.061021 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="dnsmasq-dns"
Mar 20 08:49:36 crc kubenswrapper[5136]: E0320 08:49:36.061034 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="init"
Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.061041 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="init"
Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.061177 5136 
memory_manager.go:354] "RemoveStaleState removing state" podUID="cad94756-feb3-42e4-8c87-b0cfb638edba" containerName="dnsmasq-dns" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.062273 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.080606 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-87jpl"] Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.214880 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msd9n\" (UniqueName: \"kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.214978 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.215051 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.316975 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.317107 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.317209 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msd9n\" (UniqueName: \"kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.317528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.317562 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.340776 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msd9n\" (UniqueName: 
\"kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n\") pod \"certified-operators-87jpl\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.396711 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:36 crc kubenswrapper[5136]: I0320 08:49:36.935956 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-87jpl"] Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.183051 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.184701 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.188059 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.200754 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.350722 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97lqx\" (UniqueName: \"kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.351111 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.351165 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.351208 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.351244 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.351301 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.368403 5136 generic.go:334] "Generic (PLEG): container finished" podID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerID="97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491" exitCode=0 Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.368444 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerDied","Data":"97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491"} Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.368747 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerStarted","Data":"ad07a053e6003535fae2be960f8f7c86548891e11553064fa9be370310707a5f"} Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.370461 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452702 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97lqx\" (UniqueName: \"kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452803 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452863 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452927 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452967 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.452998 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.458653 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.458692 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 
08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.460413 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.461681 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.470382 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97lqx\" (UniqueName: \"kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx\") pod \"cinder-scheduler-0\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") " pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.503218 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 08:49:37 crc kubenswrapper[5136]: I0320 08:49:37.939996 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.390370 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerStarted","Data":"643f48d1ba3dddd8e33500673a9d69f560010b92aaf976b899d98ec860bc8aaf"} Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.427007 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerStarted","Data":"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5"} Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.834923 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.835523 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api-log" containerID="cri-o://2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8" gracePeriod=30 Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.836028 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api" containerID="cri-o://1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994" gracePeriod=30 Mar 20 08:49:38 crc kubenswrapper[5136]: I0320 08:49:38.869104 5136 scope.go:117] "RemoveContainer" containerID="18976fde7b0e8720c3912ec558d2b411507101e156ae43c4e540472db0f27db1" Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.410743 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerID="2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8" exitCode=143 Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.411074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerDied","Data":"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8"} Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.413851 5136 generic.go:334] "Generic (PLEG): container finished" podID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerID="b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5" exitCode=0 Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.413895 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerDied","Data":"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5"} Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.413990 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerStarted","Data":"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957"} Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.426872 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerStarted","Data":"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad"} Mar 20 08:49:39 crc kubenswrapper[5136]: I0320 08:49:39.445258 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-87jpl" podStartSLOduration=1.931598508 podStartE2EDuration="3.445240933s" podCreationTimestamp="2026-03-20 08:49:36 +0000 UTC" firstStartedPulling="2026-03-20 08:49:37.370147057 
+0000 UTC m=+7209.629458208" lastFinishedPulling="2026-03-20 08:49:38.883789482 +0000 UTC m=+7211.143100633" observedRunningTime="2026-03-20 08:49:39.436023606 +0000 UTC m=+7211.695334757" watchObservedRunningTime="2026-03-20 08:49:39.445240933 +0000 UTC m=+7211.704552084" Mar 20 08:49:40 crc kubenswrapper[5136]: I0320 08:49:40.435786 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerStarted","Data":"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065"} Mar 20 08:49:40 crc kubenswrapper[5136]: I0320 08:49:40.459825 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.222389145 podStartE2EDuration="3.459791161s" podCreationTimestamp="2026-03-20 08:49:37 +0000 UTC" firstStartedPulling="2026-03-20 08:49:37.955777518 +0000 UTC m=+7210.215088669" lastFinishedPulling="2026-03-20 08:49:38.193179514 +0000 UTC m=+7210.452490685" observedRunningTime="2026-03-20 08:49:40.453620549 +0000 UTC m=+7212.712931690" watchObservedRunningTime="2026-03-20 08:49:40.459791161 +0000 UTC m=+7212.719102312" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.025933 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.101:8776/healthcheck\": read tcp 10.217.0.2:52180->10.217.1.101:8776: read: connection reset by peer" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.408182 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.456502 5136 generic.go:334] "Generic (PLEG): container finished" podID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerID="1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994" exitCode=0 Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.456942 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerDied","Data":"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994"} Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.457020 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"899ba9fb-f6d6-4063-9489-482bdf8cb9c4","Type":"ContainerDied","Data":"ae6df973bb937e0388d9cc5aabb53d0a185dae68f0fbc7da3126af71130384e1"} Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.457045 5136 scope.go:117] "RemoveContainer" containerID="1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.457359 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.480436 5136 scope.go:117] "RemoveContainer" containerID="2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.500192 5136 scope.go:117] "RemoveContainer" containerID="1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994" Mar 20 08:49:42 crc kubenswrapper[5136]: E0320 08:49:42.500563 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994\": container with ID starting with 1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994 not found: ID does not exist" containerID="1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.500611 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994"} err="failed to get container status \"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994\": rpc error: code = NotFound desc = could not find container \"1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994\": container with ID starting with 1b8e5998fd2173c4ee05720b713d5eb8d70b46f67351c6e4ec026a3f53f1d994 not found: ID does not exist" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.500632 5136 scope.go:117] "RemoveContainer" containerID="2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8" Mar 20 08:49:42 crc kubenswrapper[5136]: E0320 08:49:42.500866 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8\": container with ID starting with 
2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8 not found: ID does not exist" containerID="2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.500904 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8"} err="failed to get container status \"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8\": rpc error: code = NotFound desc = could not find container \"2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8\": container with ID starting with 2a6d3f8d3e769d0229818904e99d9a95f82212e9306eefc1f4b8a5cf64ce6cf8 not found: ID does not exist" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.504619 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.541745 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.541853 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.541876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4rq6\" (UniqueName: \"kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.541978 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542020 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542071 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542116 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542201 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542245 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542319 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs\") pod \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\" (UID: \"899ba9fb-f6d6-4063-9489-482bdf8cb9c4\") " Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.542701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs" (OuterVolumeSpecName: "logs") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.547797 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.547851 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.548057 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts" (OuterVolumeSpecName: "scripts") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.549405 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6" (OuterVolumeSpecName: "kube-api-access-s4rq6") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "kube-api-access-s4rq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.563970 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.564440 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.596917 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data" (OuterVolumeSpecName: "config-data") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.601425 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.617033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "899ba9fb-f6d6-4063-9489-482bdf8cb9c4" (UID: "899ba9fb-f6d6-4063-9489-482bdf8cb9c4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649316 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4rq6\" (UniqueName: \"kubernetes.io/projected/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-kube-api-access-s4rq6\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649346 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649356 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649363 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649372 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649379 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.649389 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/899ba9fb-f6d6-4063-9489-482bdf8cb9c4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.790320 
5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.798511 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.809365 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:42 crc kubenswrapper[5136]: E0320 08:49:42.809754 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.809775 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api" Mar 20 08:49:42 crc kubenswrapper[5136]: E0320 08:49:42.809826 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api-log" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.809835 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api-log" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.810016 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api-log" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.810040 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" containerName="cinder-api" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.811162 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.820672 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.820674 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.820717 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.825696 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953272 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953338 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953383 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953420 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953450 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953471 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953493 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953525 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg2m5\" (UniqueName: \"kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:42 crc kubenswrapper[5136]: I0320 08:49:42.953605 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055604 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055664 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055709 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055743 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055779 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055805 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055848 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055883 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg2m5\" (UniqueName: \"kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.055968 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.056006 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.056314 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc 
kubenswrapper[5136]: I0320 08:49:43.059594 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.060053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.060053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.061058 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.061150 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.061146 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle\") pod \"cinder-api-0\" (UID: 
\"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.073069 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg2m5\" (UniqueName: \"kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5\") pod \"cinder-api-0\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " pod="openstack/cinder-api-0" Mar 20 08:49:43 crc kubenswrapper[5136]: I0320 08:49:43.134625 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 08:49:44 crc kubenswrapper[5136]: I0320 08:49:44.179507 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Mar 20 08:49:44 crc kubenswrapper[5136]: I0320 08:49:44.409638 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899ba9fb-f6d6-4063-9489-482bdf8cb9c4" path="/var/lib/kubelet/pods/899ba9fb-f6d6-4063-9489-482bdf8cb9c4/volumes" Mar 20 08:49:44 crc kubenswrapper[5136]: I0320 08:49:44.476347 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerStarted","Data":"e5d5c7c5c5992aa7583b39735a8b9b809168a5e579da62b57e92455fa830342d"} Mar 20 08:49:45 crc kubenswrapper[5136]: I0320 08:49:45.486308 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerStarted","Data":"23592f2e3f685cf11f8e09b90281731a317e3331b51d973536b5b6cf9ce01a69"} Mar 20 08:49:45 crc kubenswrapper[5136]: I0320 08:49:45.486619 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerStarted","Data":"99aa025dc61faebaa87d0e9d2a4856c44ddf012f862d7c369fe941dabbd9836f"} Mar 20 08:49:45 crc kubenswrapper[5136]: I0320 08:49:45.486780 5136 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Mar 20 08:49:45 crc kubenswrapper[5136]: I0320 08:49:45.512362 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.512339139 podStartE2EDuration="3.512339139s" podCreationTimestamp="2026-03-20 08:49:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:45.506473588 +0000 UTC m=+7217.765784739" watchObservedRunningTime="2026-03-20 08:49:45.512339139 +0000 UTC m=+7217.771650290" Mar 20 08:49:46 crc kubenswrapper[5136]: I0320 08:49:46.414157 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:46 crc kubenswrapper[5136]: I0320 08:49:46.414649 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:46 crc kubenswrapper[5136]: I0320 08:49:46.441249 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:46 crc kubenswrapper[5136]: I0320 08:49:46.537984 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:46 crc kubenswrapper[5136]: I0320 08:49:46.676706 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-87jpl"] Mar 20 08:49:47 crc kubenswrapper[5136]: I0320 08:49:47.711675 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Mar 20 08:49:47 crc kubenswrapper[5136]: I0320 08:49:47.779945 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 08:49:48 crc kubenswrapper[5136]: I0320 08:49:48.511074 5136 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-marketplace/certified-operators-87jpl" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="registry-server" containerID="cri-o://a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957" gracePeriod=2 Mar 20 08:49:48 crc kubenswrapper[5136]: I0320 08:49:48.511218 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="probe" containerID="cri-o://e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065" gracePeriod=30 Mar 20 08:49:48 crc kubenswrapper[5136]: I0320 08:49:48.511167 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="cinder-scheduler" containerID="cri-o://a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad" gracePeriod=30 Mar 20 08:49:48 crc kubenswrapper[5136]: I0320 08:49:48.972461 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.061250 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities\") pod \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.061442 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content\") pod \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.061594 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msd9n\" (UniqueName: \"kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n\") pod \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\" (UID: \"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b\") " Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.064088 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities" (OuterVolumeSpecName: "utilities") pod "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" (UID: "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.067668 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n" (OuterVolumeSpecName: "kube-api-access-msd9n") pod "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" (UID: "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b"). InnerVolumeSpecName "kube-api-access-msd9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.124319 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" (UID: "2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.163847 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.163874 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.163885 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msd9n\" (UniqueName: \"kubernetes.io/projected/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b-kube-api-access-msd9n\") on node \"crc\" DevicePath \"\"" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.521470 5136 generic.go:334] "Generic (PLEG): container finished" podID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerID="a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957" exitCode=0 Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.521651 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerDied","Data":"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957"} Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.521858 5136 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-87jpl" event={"ID":"2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b","Type":"ContainerDied","Data":"ad07a053e6003535fae2be960f8f7c86548891e11553064fa9be370310707a5f"} Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.521878 5136 scope.go:117] "RemoveContainer" containerID="a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.521720 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-87jpl" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.524900 5136 generic.go:334] "Generic (PLEG): container finished" podID="302b747b-13f8-4339-b6bd-843625626b48" containerID="e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065" exitCode=0 Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.524941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerDied","Data":"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065"} Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.561311 5136 scope.go:117] "RemoveContainer" containerID="b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.570611 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-87jpl"] Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.579247 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-87jpl"] Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.584997 5136 scope.go:117] "RemoveContainer" containerID="97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.609861 5136 scope.go:117] "RemoveContainer" 
containerID="a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957" Mar 20 08:49:49 crc kubenswrapper[5136]: E0320 08:49:49.610285 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957\": container with ID starting with a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957 not found: ID does not exist" containerID="a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.610332 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957"} err="failed to get container status \"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957\": rpc error: code = NotFound desc = could not find container \"a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957\": container with ID starting with a7dc35d4b52fdc0db747a245d1327585f84494993833b6c58ca0f9b815ef0957 not found: ID does not exist" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.610363 5136 scope.go:117] "RemoveContainer" containerID="b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5" Mar 20 08:49:49 crc kubenswrapper[5136]: E0320 08:49:49.610623 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5\": container with ID starting with b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5 not found: ID does not exist" containerID="b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.610659 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5"} err="failed to get container status \"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5\": rpc error: code = NotFound desc = could not find container \"b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5\": container with ID starting with b17de963e101c6874fb2c5ee1a5a69fd6a56c707ffdb164ed9d609af37fd86d5 not found: ID does not exist" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.610678 5136 scope.go:117] "RemoveContainer" containerID="97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491" Mar 20 08:49:49 crc kubenswrapper[5136]: E0320 08:49:49.610886 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491\": container with ID starting with 97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491 not found: ID does not exist" containerID="97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.610921 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491"} err="failed to get container status \"97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491\": rpc error: code = NotFound desc = could not find container \"97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491\": container with ID starting with 97c04627c5162fae222f07f0ee9a179c10210b2be0497e1c98bb5aaa7eaef491 not found: ID does not exist" Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.854541 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978268 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") "
Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978326 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") "
Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978379 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") "
Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978486 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") "
Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978587 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") "
Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.978634 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97lqx\" (UniqueName: \"kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx\") pod \"302b747b-13f8-4339-b6bd-843625626b48\" (UID: \"302b747b-13f8-4339-b6bd-843625626b48\") "
Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.980408 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.984933 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts" (OuterVolumeSpecName: "scripts") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.984980 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx" (OuterVolumeSpecName: "kube-api-access-97lqx") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "kube-api-access-97lqx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:49:49 crc kubenswrapper[5136]: I0320 08:49:49.986181 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.031412 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.072332 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data" (OuterVolumeSpecName: "config-data") pod "302b747b-13f8-4339-b6bd-843625626b48" (UID: "302b747b-13f8-4339-b6bd-843625626b48"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080854 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/302b747b-13f8-4339-b6bd-843625626b48-etc-machine-id\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080883 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080893 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97lqx\" (UniqueName: \"kubernetes.io/projected/302b747b-13f8-4339-b6bd-843625626b48-kube-api-access-97lqx\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080905 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080913 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.080922 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/302b747b-13f8-4339-b6bd-843625626b48-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.404839 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" path="/var/lib/kubelet/pods/2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b/volumes"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.546895 5136 generic.go:334] "Generic (PLEG): container finished" podID="302b747b-13f8-4339-b6bd-843625626b48" containerID="a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad" exitCode=0
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.547242 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.547357 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerDied","Data":"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad"}
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.547535 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"302b747b-13f8-4339-b6bd-843625626b48","Type":"ContainerDied","Data":"643f48d1ba3dddd8e33500673a9d69f560010b92aaf976b899d98ec860bc8aaf"}
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.547736 5136 scope.go:117] "RemoveContainer" containerID="e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.573012 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.580199 5136 scope.go:117] "RemoveContainer" containerID="a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.586729 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.596906 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.597364 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="probe"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597383 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="probe"
Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.597396 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="extract-content"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597403 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="extract-content"
Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.597418 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="registry-server"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597424 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="registry-server"
Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.597433 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="cinder-scheduler"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597439 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="cinder-scheduler"
Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.597459 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="extract-utilities"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597465 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="extract-utilities"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597663 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="cinder-scheduler"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597698 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e15dc0b-b8b1-4e92-91ad-7c90a57e9c5b" containerName="registry-server"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.597712 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="302b747b-13f8-4339-b6bd-843625626b48" containerName="probe"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.598781 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.608299 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.611443 5136 scope.go:117] "RemoveContainer" containerID="e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065"
Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.612078 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065\": container with ID starting with e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065 not found: ID does not exist" containerID="e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.612122 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065"} err="failed to get container status \"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065\": rpc error: code = NotFound desc = could not find container \"e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065\": container with ID starting with e3652397a41934cc4c80677868b169b45e11151398519b46e04a81318a459065 not found: ID does not exist"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.612180 5136 scope.go:117] "RemoveContainer" containerID="a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad"
Mar 20 08:49:50 crc kubenswrapper[5136]: E0320 08:49:50.612472 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad\": container with ID starting with a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad not found: ID does not exist" containerID="a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.612492 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad"} err="failed to get container status \"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad\": rpc error: code = NotFound desc = could not find container \"a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad\": container with ID starting with a914a47210f591190b25bb98821bfdc9e7434c42f8f0ed8df670cb76b0b18fad not found: ID does not exist"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.619708 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.694727 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.694845 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.694878 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.695043 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.695091 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.695372 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mw7n\" (UniqueName: \"kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.796895 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.796988 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.797029 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.797110 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.797151 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.797221 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.798778 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mw7n\" (UniqueName: \"kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.800891 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.802518 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.803162 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.806214 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.824665 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mw7n\" (UniqueName: \"kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n\") pod \"cinder-scheduler-0\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " pod="openstack/cinder-scheduler-0"
Mar 20 08:49:50 crc kubenswrapper[5136]: I0320 08:49:50.967445 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Mar 20 08:49:51 crc kubenswrapper[5136]: I0320 08:49:51.462683 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Mar 20 08:49:51 crc kubenswrapper[5136]: W0320 08:49:51.469475 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7be786a7_1dee_4cfb_bada_4883a9326c71.slice/crio-902c62cf1140494c6f31a74b7c931afaffaa08d6e8a6315048461c8df99fb197 WatchSource:0}: Error finding container 902c62cf1140494c6f31a74b7c931afaffaa08d6e8a6315048461c8df99fb197: Status 404 returned error can't find the container with id 902c62cf1140494c6f31a74b7c931afaffaa08d6e8a6315048461c8df99fb197
Mar 20 08:49:51 crc kubenswrapper[5136]: I0320 08:49:51.565653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerStarted","Data":"902c62cf1140494c6f31a74b7c931afaffaa08d6e8a6315048461c8df99fb197"}
Mar 20 08:49:52 crc kubenswrapper[5136]: I0320 08:49:52.408174 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="302b747b-13f8-4339-b6bd-843625626b48" path="/var/lib/kubelet/pods/302b747b-13f8-4339-b6bd-843625626b48/volumes"
Mar 20 08:49:52 crc kubenswrapper[5136]: I0320 08:49:52.576135 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerStarted","Data":"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a"}
Mar 20 08:49:52 crc kubenswrapper[5136]: I0320 08:49:52.576183 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerStarted","Data":"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77"}
Mar 20 08:49:52 crc kubenswrapper[5136]: I0320 08:49:52.593996 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=2.593979385 podStartE2EDuration="2.593979385s" podCreationTimestamp="2026-03-20 08:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:49:52.591040304 +0000 UTC m=+7224.850351455" watchObservedRunningTime="2026-03-20 08:49:52.593979385 +0000 UTC m=+7224.853290536"
Mar 20 08:49:54 crc kubenswrapper[5136]: I0320 08:49:54.927727 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Mar 20 08:49:55 crc kubenswrapper[5136]: I0320 08:49:55.967569 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.147788 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566610-hrt5r"]
Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.149747 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566610-hrt5r"
Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.153371 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.153693 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.161541 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.165246 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566610-hrt5r"]
Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.290403 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjmh\" (UniqueName: \"kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh\") pod \"auto-csr-approver-29566610-hrt5r\" (UID: \"e312a5ea-3b15-4c57-8b2d-613840a5d9ca\") " pod="openshift-infra/auto-csr-approver-29566610-hrt5r"
Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.392432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmjmh\" (UniqueName: \"kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh\") pod \"auto-csr-approver-29566610-hrt5r\" (UID: \"e312a5ea-3b15-4c57-8b2d-613840a5d9ca\") " pod="openshift-infra/auto-csr-approver-29566610-hrt5r"
Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.427482 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmjmh\" (UniqueName: \"kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh\") pod \"auto-csr-approver-29566610-hrt5r\" (UID: \"e312a5ea-3b15-4c57-8b2d-613840a5d9ca\") " pod="openshift-infra/auto-csr-approver-29566610-hrt5r"
Mar 20 08:50:00 crc kubenswrapper[5136]: I0320 08:50:00.475413 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566610-hrt5r"
Mar 20 08:50:01 crc kubenswrapper[5136]: I0320 08:50:01.015251 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566610-hrt5r"]
Mar 20 08:50:01 crc kubenswrapper[5136]: W0320 08:50:01.029366 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode312a5ea_3b15_4c57_8b2d_613840a5d9ca.slice/crio-e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1 WatchSource:0}: Error finding container e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1: Status 404 returned error can't find the container with id e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1
Mar 20 08:50:01 crc kubenswrapper[5136]: I0320 08:50:01.177670 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Mar 20 08:50:01 crc kubenswrapper[5136]: I0320 08:50:01.668845 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566610-hrt5r" event={"ID":"e312a5ea-3b15-4c57-8b2d-613840a5d9ca","Type":"ContainerStarted","Data":"e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1"}
Mar 20 08:50:03 crc kubenswrapper[5136]: I0320 08:50:03.688466 5136 generic.go:334] "Generic (PLEG): container finished" podID="e312a5ea-3b15-4c57-8b2d-613840a5d9ca" containerID="482a97e7c7d9733c356a59b74d29e1b51c08c0378829f0707d6918c34c51d893" exitCode=0
Mar 20 08:50:03 crc kubenswrapper[5136]: I0320 08:50:03.688567 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566610-hrt5r" event={"ID":"e312a5ea-3b15-4c57-8b2d-613840a5d9ca","Type":"ContainerDied","Data":"482a97e7c7d9733c356a59b74d29e1b51c08c0378829f0707d6918c34c51d893"}
Mar 20 08:50:03 crc kubenswrapper[5136]: I0320 08:50:03.958292 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-tr2s5"]
Mar 20 08:50:03 crc kubenswrapper[5136]: I0320 08:50:03.959597 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tr2s5"
Mar 20 08:50:03 crc kubenswrapper[5136]: I0320 08:50:03.971728 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tr2s5"]
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.063890 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.063957 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ml8b5\" (UniqueName: \"kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.065700 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6249-account-create-update-mrh6x"]
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.066780 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6249-account-create-update-mrh6x"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.068603 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.074291 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6249-account-create-update-mrh6x"]
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.165981 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.166035 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ml8b5\" (UniqueName: \"kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.166076 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts\") pod \"glance-6249-account-create-update-mrh6x\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.166103 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhnn8\" (UniqueName: \"kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8\") pod \"glance-6249-account-create-update-mrh6x\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.166941 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.185073 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ml8b5\" (UniqueName: \"kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5\") pod \"glance-db-create-tr2s5\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " pod="openstack/glance-db-create-tr2s5"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.267981 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts\") pod \"glance-6249-account-create-update-mrh6x\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.268316 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhnn8\" (UniqueName: \"kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8\") pod \"glance-6249-account-create-update-mrh6x\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.268874 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts\") pod \"glance-6249-account-create-update-mrh6x\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.275347 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tr2s5"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.285431 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhnn8\" (UniqueName: \"kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8\") pod \"glance-6249-account-create-update-mrh6x\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " pod="openstack/glance-6249-account-create-update-mrh6x"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.392561 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6249-account-create-update-mrh6x"
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.714830 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-tr2s5"]
Mar 20 08:50:04 crc kubenswrapper[5136]: W0320 08:50:04.715999 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod570ecd59_555d_4f55_aed1_6fe547da30b1.slice/crio-b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db WatchSource:0}: Error finding container b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db: Status 404 returned error can't find the container with id b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.912713 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6249-account-create-update-mrh6x"]
Mar 20 08:50:04 crc kubenswrapper[5136]: W0320 08:50:04.918245 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd07221a_a5f4_4a47_a7bf_354b0d432b27.slice/crio-627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b WatchSource:0}: Error finding container 627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b: Status 404 returned error can't find the container with id 627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b
Mar 20 08:50:04 crc kubenswrapper[5136]: I0320 08:50:04.992419 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566610-hrt5r"
Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.081965 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmjmh\" (UniqueName: \"kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh\") pod \"e312a5ea-3b15-4c57-8b2d-613840a5d9ca\" (UID: \"e312a5ea-3b15-4c57-8b2d-613840a5d9ca\") "
Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.086636 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh" (OuterVolumeSpecName: "kube-api-access-qmjmh") pod "e312a5ea-3b15-4c57-8b2d-613840a5d9ca" (UID: "e312a5ea-3b15-4c57-8b2d-613840a5d9ca"). InnerVolumeSpecName "kube-api-access-qmjmh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.184637 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmjmh\" (UniqueName: \"kubernetes.io/projected/e312a5ea-3b15-4c57-8b2d-613840a5d9ca-kube-api-access-qmjmh\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.713161 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566610-hrt5r" event={"ID":"e312a5ea-3b15-4c57-8b2d-613840a5d9ca","Type":"ContainerDied","Data":"e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1"}
Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.713185 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566610-hrt5r"
Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.713199 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e415c5c14a320c68c7dd32ab0a9e072ba14efb06ed154ac2b195ddfda7b2e6c1"
Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.718417 5136 generic.go:334] "Generic (PLEG): container finished" podID="570ecd59-555d-4f55-aed1-6fe547da30b1" containerID="39d097d4e3a8458b775ea906bb0dd550fdd83b3369518a3cd12d9c26c24a8a02" exitCode=0
Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.718788 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tr2s5" event={"ID":"570ecd59-555d-4f55-aed1-6fe547da30b1","Type":"ContainerDied","Data":"39d097d4e3a8458b775ea906bb0dd550fdd83b3369518a3cd12d9c26c24a8a02"}
Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.718832 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tr2s5" event={"ID":"570ecd59-555d-4f55-aed1-6fe547da30b1","Type":"ContainerStarted","Data":"b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db"}
Mar 20 08:50:05 crc kubenswrapper[5136]: I0320
08:50:05.720576 5136 generic.go:334] "Generic (PLEG): container finished" podID="fd07221a-a5f4-4a47-a7bf-354b0d432b27" containerID="62b91ae766226b0da7fe114136196e5dea194bad90be0b48d6f9d8c6e4102b25" exitCode=0 Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.720598 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6249-account-create-update-mrh6x" event={"ID":"fd07221a-a5f4-4a47-a7bf-354b0d432b27","Type":"ContainerDied","Data":"62b91ae766226b0da7fe114136196e5dea194bad90be0b48d6f9d8c6e4102b25"} Mar 20 08:50:05 crc kubenswrapper[5136]: I0320 08:50:05.720612 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6249-account-create-update-mrh6x" event={"ID":"fd07221a-a5f4-4a47-a7bf-354b0d432b27","Type":"ContainerStarted","Data":"627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b"} Mar 20 08:50:06 crc kubenswrapper[5136]: I0320 08:50:06.060445 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566604-xrrdd"] Mar 20 08:50:06 crc kubenswrapper[5136]: I0320 08:50:06.068868 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566604-xrrdd"] Mar 20 08:50:06 crc kubenswrapper[5136]: I0320 08:50:06.410094 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="372179a0-537a-4126-97c1-2d6a045e8798" path="/var/lib/kubelet/pods/372179a0-537a-4126-97c1-2d6a045e8798/volumes" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.062674 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6249-account-create-update-mrh6x" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.067111 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-tr2s5" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.124917 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ml8b5\" (UniqueName: \"kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5\") pod \"570ecd59-555d-4f55-aed1-6fe547da30b1\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.125407 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhnn8\" (UniqueName: \"kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8\") pod \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.125468 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts\") pod \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\" (UID: \"fd07221a-a5f4-4a47-a7bf-354b0d432b27\") " Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.125507 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts\") pod \"570ecd59-555d-4f55-aed1-6fe547da30b1\" (UID: \"570ecd59-555d-4f55-aed1-6fe547da30b1\") " Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.126405 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fd07221a-a5f4-4a47-a7bf-354b0d432b27" (UID: "fd07221a-a5f4-4a47-a7bf-354b0d432b27"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.126462 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "570ecd59-555d-4f55-aed1-6fe547da30b1" (UID: "570ecd59-555d-4f55-aed1-6fe547da30b1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.132927 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5" (OuterVolumeSpecName: "kube-api-access-ml8b5") pod "570ecd59-555d-4f55-aed1-6fe547da30b1" (UID: "570ecd59-555d-4f55-aed1-6fe547da30b1"). InnerVolumeSpecName "kube-api-access-ml8b5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.147787 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8" (OuterVolumeSpecName: "kube-api-access-fhnn8") pod "fd07221a-a5f4-4a47-a7bf-354b0d432b27" (UID: "fd07221a-a5f4-4a47-a7bf-354b0d432b27"). InnerVolumeSpecName "kube-api-access-fhnn8". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.227738 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhnn8\" (UniqueName: \"kubernetes.io/projected/fd07221a-a5f4-4a47-a7bf-354b0d432b27-kube-api-access-fhnn8\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.227776 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fd07221a-a5f4-4a47-a7bf-354b0d432b27-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.227791 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/570ecd59-555d-4f55-aed1-6fe547da30b1-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.227801 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ml8b5\" (UniqueName: \"kubernetes.io/projected/570ecd59-555d-4f55-aed1-6fe547da30b1-kube-api-access-ml8b5\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.735876 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6249-account-create-update-mrh6x" event={"ID":"fd07221a-a5f4-4a47-a7bf-354b0d432b27","Type":"ContainerDied","Data":"627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b"}
Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.735922 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="627b05fc64cc627bdaf739a99048293083b7c8239c5cfe96ff7ea4ae31cfba6b"
Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.735936 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6249-account-create-update-mrh6x"
Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.737129 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-tr2s5" event={"ID":"570ecd59-555d-4f55-aed1-6fe547da30b1","Type":"ContainerDied","Data":"b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db"}
Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.737149 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5e368ab159d78efbf2d3dddd014cf67bfd2bb9cb1b7832497e86f66a6e889db"
Mar 20 08:50:07 crc kubenswrapper[5136]: I0320 08:50:07.737196 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-tr2s5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.247784 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-dlmp5"]
Mar 20 08:50:09 crc kubenswrapper[5136]: E0320 08:50:09.252439 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="570ecd59-555d-4f55-aed1-6fe547da30b1" containerName="mariadb-database-create"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.252466 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="570ecd59-555d-4f55-aed1-6fe547da30b1" containerName="mariadb-database-create"
Mar 20 08:50:09 crc kubenswrapper[5136]: E0320 08:50:09.252496 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd07221a-a5f4-4a47-a7bf-354b0d432b27" containerName="mariadb-account-create-update"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.252506 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd07221a-a5f4-4a47-a7bf-354b0d432b27" containerName="mariadb-account-create-update"
Mar 20 08:50:09 crc kubenswrapper[5136]: E0320 08:50:09.252524 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e312a5ea-3b15-4c57-8b2d-613840a5d9ca" containerName="oc"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.252533 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e312a5ea-3b15-4c57-8b2d-613840a5d9ca" containerName="oc"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.252772 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e312a5ea-3b15-4c57-8b2d-613840a5d9ca" containerName="oc"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.252804 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd07221a-a5f4-4a47-a7bf-354b0d432b27" containerName="mariadb-account-create-update"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.252860 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="570ecd59-555d-4f55-aed1-6fe547da30b1" containerName="mariadb-database-create"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.253572 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.256162 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zsfgx"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.258164 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dlmp5"]
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.259459 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.360617 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.360708 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.360783 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjrdh\" (UniqueName: \"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.360827 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.462504 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.462748 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjrdh\" (UniqueName: \"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.462794 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.463127 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.475698 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.476034 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.476483 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.484356 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjrdh\" (UniqueName: \"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh\") pod \"glance-db-sync-dlmp5\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") " pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:09 crc kubenswrapper[5136]: I0320 08:50:09.598070 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:10 crc kubenswrapper[5136]: I0320 08:50:10.225537 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-dlmp5"]
Mar 20 08:50:10 crc kubenswrapper[5136]: I0320 08:50:10.766493 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dlmp5" event={"ID":"d0757343-a168-444b-ab9f-eb32dc3e416a","Type":"ContainerStarted","Data":"4b2eb053d92eb0ee6c5af951975f69f5603c187dedf27f124fbc5e104145fc8e"}
Mar 20 08:50:26 crc kubenswrapper[5136]: I0320 08:50:26.929595 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dlmp5" event={"ID":"d0757343-a168-444b-ab9f-eb32dc3e416a","Type":"ContainerStarted","Data":"ea4b393a20ea1ece97f36d015f4602f5e94b839ad564485cfca078d956bd0138"}
Mar 20 08:50:26 crc kubenswrapper[5136]: I0320 08:50:26.948790 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-dlmp5" podStartSLOduration=2.36994352 podStartE2EDuration="17.948771606s" podCreationTimestamp="2026-03-20 08:50:09 +0000 UTC" firstStartedPulling="2026-03-20 08:50:10.227788819 +0000 UTC m=+7242.487099970" lastFinishedPulling="2026-03-20 08:50:25.806616905 +0000 UTC m=+7258.065928056" observedRunningTime="2026-03-20 08:50:26.947649242 +0000 UTC m=+7259.206960433" watchObservedRunningTime="2026-03-20 08:50:26.948771606 +0000 UTC m=+7259.208082757"
Mar 20 08:50:29 crc kubenswrapper[5136]: I0320 08:50:29.953162 5136 generic.go:334] "Generic (PLEG): container finished" podID="d0757343-a168-444b-ab9f-eb32dc3e416a" containerID="ea4b393a20ea1ece97f36d015f4602f5e94b839ad564485cfca078d956bd0138" exitCode=0
Mar 20 08:50:29 crc kubenswrapper[5136]: I0320 08:50:29.953255 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dlmp5" event={"ID":"d0757343-a168-444b-ab9f-eb32dc3e416a","Type":"ContainerDied","Data":"ea4b393a20ea1ece97f36d015f4602f5e94b839ad564485cfca078d956bd0138"}
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.427303 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.590975 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle\") pod \"d0757343-a168-444b-ab9f-eb32dc3e416a\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") "
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.591056 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data\") pod \"d0757343-a168-444b-ab9f-eb32dc3e416a\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") "
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.591139 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data\") pod \"d0757343-a168-444b-ab9f-eb32dc3e416a\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") "
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.591369 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjrdh\" (UniqueName: \"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh\") pod \"d0757343-a168-444b-ab9f-eb32dc3e416a\" (UID: \"d0757343-a168-444b-ab9f-eb32dc3e416a\") "
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.602937 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh" (OuterVolumeSpecName: "kube-api-access-rjrdh") pod "d0757343-a168-444b-ab9f-eb32dc3e416a" (UID: "d0757343-a168-444b-ab9f-eb32dc3e416a"). InnerVolumeSpecName "kube-api-access-rjrdh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.617446 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d0757343-a168-444b-ab9f-eb32dc3e416a" (UID: "d0757343-a168-444b-ab9f-eb32dc3e416a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.628021 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d0757343-a168-444b-ab9f-eb32dc3e416a" (UID: "d0757343-a168-444b-ab9f-eb32dc3e416a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.649121 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data" (OuterVolumeSpecName: "config-data") pod "d0757343-a168-444b-ab9f-eb32dc3e416a" (UID: "d0757343-a168-444b-ab9f-eb32dc3e416a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.694781 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjrdh\" (UniqueName: \"kubernetes.io/projected/d0757343-a168-444b-ab9f-eb32dc3e416a-kube-api-access-rjrdh\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.694840 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.694853 5136 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.694866 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d0757343-a168-444b-ab9f-eb32dc3e416a-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.972203 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-dlmp5" event={"ID":"d0757343-a168-444b-ab9f-eb32dc3e416a","Type":"ContainerDied","Data":"4b2eb053d92eb0ee6c5af951975f69f5603c187dedf27f124fbc5e104145fc8e"}
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.972242 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b2eb053d92eb0ee6c5af951975f69f5603c187dedf27f124fbc5e104145fc8e"
Mar 20 08:50:31 crc kubenswrapper[5136]: I0320 08:50:31.972314 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-dlmp5"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.279779 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 08:50:32 crc kubenswrapper[5136]: E0320 08:50:32.280161 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0757343-a168-444b-ab9f-eb32dc3e416a" containerName="glance-db-sync"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.280185 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0757343-a168-444b-ab9f-eb32dc3e416a" containerName="glance-db-sync"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.280334 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0757343-a168-444b-ab9f-eb32dc3e416a" containerName="glance-db-sync"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.281238 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.286868 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.286922 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zsfgx"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.287125 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.296772 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405278 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gfjz\" (UniqueName: \"kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405348 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405372 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405405 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.405438 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.420625 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"]
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.422520 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.451951 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"]
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.493132 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.495845 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.501486 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.504643 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509145 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4tll\" (UniqueName: \"kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509220 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gfjz\" (UniqueName: \"kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509264 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509298 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkqpm\" (UniqueName: \"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509318 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509345 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509384 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509404 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509427 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509454 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509479 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509507 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\"
(UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509547 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509588 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509631 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509652 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.509679 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.510446 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.510734 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.515212 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.515317 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.517406 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.563931 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gfjz\" (UniqueName: \"kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz\") pod \"glance-default-external-api-0\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.603906 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611767 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611825 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611876 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4tll\" (UniqueName: \"kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611904 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611929 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkqpm\" (UniqueName: \"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611946 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.611973 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612003 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612028 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612065 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612098 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612364 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.612397 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.613120 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: 
\"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.613569 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.613929 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.614958 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.618765 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.620365 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc 
kubenswrapper[5136]: I0320 08:50:32.624554 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.629760 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4tll\" (UniqueName: \"kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll\") pod \"glance-default-internal-api-0\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.637553 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkqpm\" (UniqueName: \"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm\") pod \"dnsmasq-dns-749cf87df7-5r4jn\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.745272 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:32 crc kubenswrapper[5136]: I0320 08:50:32.825536 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:33 crc kubenswrapper[5136]: I0320 08:50:33.246548 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:50:33 crc kubenswrapper[5136]: I0320 08:50:33.269017 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"] Mar 20 08:50:33 crc kubenswrapper[5136]: W0320 08:50:33.270715 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddde4894e_af50_4137_8cb4_469a0363b248.slice/crio-82b5c6a87292a840c7626e0d30263221c4e7065fd0f2c1e7d5b5956e5a5ba695 WatchSource:0}: Error finding container 82b5c6a87292a840c7626e0d30263221c4e7065fd0f2c1e7d5b5956e5a5ba695: Status 404 returned error can't find the container with id 82b5c6a87292a840c7626e0d30263221c4e7065fd0f2c1e7d5b5956e5a5ba695 Mar 20 08:50:33 crc kubenswrapper[5136]: I0320 08:50:33.455581 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.001177 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerStarted","Data":"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"} Mar 20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.001415 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerStarted","Data":"2524d6e15b5de7a3de1e6beebb9c23779a7eb1301d5ef55a2709fe103508da76"} Mar 20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.025933 5136 generic.go:334] "Generic (PLEG): container finished" podID="dde4894e-af50-4137-8cb4-469a0363b248" containerID="38d62cb7f741d8ee572a6c46fb9a977b9c469e6392e17bbc74b7fe94c516a377" exitCode=0 Mar 
20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.026275 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" event={"ID":"dde4894e-af50-4137-8cb4-469a0363b248","Type":"ContainerDied","Data":"38d62cb7f741d8ee572a6c46fb9a977b9c469e6392e17bbc74b7fe94c516a377"} Mar 20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.026309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" event={"ID":"dde4894e-af50-4137-8cb4-469a0363b248","Type":"ContainerStarted","Data":"82b5c6a87292a840c7626e0d30263221c4e7065fd0f2c1e7d5b5956e5a5ba695"} Mar 20 08:50:34 crc kubenswrapper[5136]: I0320 08:50:34.027557 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.060639 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" event={"ID":"dde4894e-af50-4137-8cb4-469a0363b248","Type":"ContainerStarted","Data":"0ebc36c7db7ba6a4fd167019fee7e33605b31fe2474b4deaee19fad2dbe59690"} Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.061297 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.068382 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerStarted","Data":"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"} Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.068650 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerStarted","Data":"636e3949d027a18021c284e372d93d822e803b424afcc4fb424553c251ed3c72"} Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.076017 5136 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerStarted","Data":"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"} Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.076154 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-log" containerID="cri-o://b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf" gracePeriod=30 Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.076411 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-httpd" containerID="cri-o://edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5" gracePeriod=30 Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.100778 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" podStartSLOduration=3.100757376 podStartE2EDuration="3.100757376s" podCreationTimestamp="2026-03-20 08:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:35.088605959 +0000 UTC m=+7267.347917120" watchObservedRunningTime="2026-03-20 08:50:35.100757376 +0000 UTC m=+7267.360068527" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.115337 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.115318266 podStartE2EDuration="3.115318266s" podCreationTimestamp="2026-03-20 08:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:35.107940467 +0000 UTC 
m=+7267.367251608" watchObservedRunningTime="2026-03-20 08:50:35.115318266 +0000 UTC m=+7267.374629417" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.673648 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.836273 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984344 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984397 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984463 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gfjz\" (UniqueName: \"kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984508 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984559 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984636 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle\") pod \"a069cdd1-a76e-4977-b511-1776284ad9ba\" (UID: \"a069cdd1-a76e-4977-b511-1776284ad9ba\") " Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs" (OuterVolumeSpecName: "logs") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.984972 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.985106 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.985124 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a069cdd1-a76e-4977-b511-1776284ad9ba-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:35 crc kubenswrapper[5136]: I0320 08:50:35.994950 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts" (OuterVolumeSpecName: "scripts") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.024709 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz" (OuterVolumeSpecName: "kube-api-access-8gfjz") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "kube-api-access-8gfjz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.030453 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.056998 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data" (OuterVolumeSpecName: "config-data") pod "a069cdd1-a76e-4977-b511-1776284ad9ba" (UID: "a069cdd1-a76e-4977-b511-1776284ad9ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.086147 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.086173 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.086157 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerStarted","Data":"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"} Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.086183 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gfjz\" (UniqueName: \"kubernetes.io/projected/a069cdd1-a76e-4977-b511-1776284ad9ba-kube-api-access-8gfjz\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.086242 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a069cdd1-a76e-4977-b511-1776284ad9ba-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.088538 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerID="edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5" exitCode=0
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.088563 5136 generic.go:334] "Generic (PLEG): container finished" podID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerID="b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf" exitCode=143
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.088826 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.089093 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerDied","Data":"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"}
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.089126 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerDied","Data":"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"}
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.089170 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a069cdd1-a76e-4977-b511-1776284ad9ba","Type":"ContainerDied","Data":"2524d6e15b5de7a3de1e6beebb9c23779a7eb1301d5ef55a2709fe103508da76"}
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.089192 5136 scope.go:117] "RemoveContainer" containerID="edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.114433 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.114414698 podStartE2EDuration="4.114414698s" podCreationTimestamp="2026-03-20 08:50:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:36.110462036 +0000 UTC m=+7268.369773187" watchObservedRunningTime="2026-03-20 08:50:36.114414698 +0000 UTC m=+7268.373725849"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.126062 5136 scope.go:117] "RemoveContainer" containerID="b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.141348 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.149044 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.153533 5136 scope.go:117] "RemoveContainer" containerID="edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"
Mar 20 08:50:36 crc kubenswrapper[5136]: E0320 08:50:36.153908 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5\": container with ID starting with edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5 not found: ID does not exist" containerID="edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.153937 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"} err="failed to get container status \"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5\": rpc error: code = NotFound desc = could not find container \"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5\": container with ID starting with edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5 not found: ID does not exist"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.153957 5136 scope.go:117] "RemoveContainer" containerID="b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"
Mar 20 08:50:36 crc kubenswrapper[5136]: E0320 08:50:36.154586 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf\": container with ID starting with b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf not found: ID does not exist" containerID="b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.154616 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"} err="failed to get container status \"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf\": rpc error: code = NotFound desc = could not find container \"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf\": container with ID starting with b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf not found: ID does not exist"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.154630 5136 scope.go:117] "RemoveContainer" containerID="edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.155336 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5"} err="failed to get container status \"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5\": rpc error: code = NotFound desc = could not find container \"edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5\": container with ID starting with edb7e04bf4d2ed9a087eee2792203a61ae0226a77f479a48682131c81c0a9fa5 not found: ID does not exist"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.155358 5136 scope.go:117] "RemoveContainer" containerID="b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.155558 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf"} err="failed to get container status \"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf\": rpc error: code = NotFound desc = could not find container \"b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf\": container with ID starting with b101c328819147841f24c356ec215802a7d2b8b2924e9c2b5272653d433217cf not found: ID does not exist"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.160142 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 08:50:36 crc kubenswrapper[5136]: E0320 08:50:36.160559 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-httpd"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.160579 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-httpd"
Mar 20 08:50:36 crc kubenswrapper[5136]: E0320 08:50:36.160606 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-log"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.160615 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-log"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.160806 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-log"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.163802 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" containerName="glance-httpd"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.165106 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.170015 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.170401 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.179671 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289418 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b6rl\" (UniqueName: \"kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289486 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289608 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289673 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289733 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289883 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.289916 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.391355 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b6rl\" (UniqueName: \"kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.391709 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.392228 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.392296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.392329 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.392994 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.393058 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.392663 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.393410 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.396174 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.396836 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.397320 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.398004 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.407629 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a069cdd1-a76e-4977-b511-1776284ad9ba" path="/var/lib/kubelet/pods/a069cdd1-a76e-4977-b511-1776284ad9ba/volumes"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.415641 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b6rl\" (UniqueName: \"kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl\") pod \"glance-default-external-api-0\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:50:36 crc kubenswrapper[5136]: I0320 08:50:36.481980 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 08:50:37 crc kubenswrapper[5136]: I0320 08:50:37.100632 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-log" containerID="cri-o://4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3" gracePeriod=30
Mar 20 08:50:37 crc kubenswrapper[5136]: I0320 08:50:37.101189 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-httpd" containerID="cri-o://efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69" gracePeriod=30
Mar 20 08:50:37 crc kubenswrapper[5136]: I0320 08:50:37.102848 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 08:50:37 crc kubenswrapper[5136]: I0320 08:50:37.932987 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026436 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") "
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026487 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") "
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026535 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4tll\" (UniqueName: \"kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") "
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026554 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") "
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026572 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") "
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.026640 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle\") pod \"97199128-8701-401c-bb22-55e0f0239271\" (UID: \"97199128-8701-401c-bb22-55e0f0239271\") "
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.027291 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs" (OuterVolumeSpecName: "logs") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.027316 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.047495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll" (OuterVolumeSpecName: "kube-api-access-f4tll") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "kube-api-access-f4tll". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.048616 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts" (OuterVolumeSpecName: "scripts") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.057607 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.075240 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data" (OuterVolumeSpecName: "config-data") pod "97199128-8701-401c-bb22-55e0f0239271" (UID: "97199128-8701-401c-bb22-55e0f0239271"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130583 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130622 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130636 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4tll\" (UniqueName: \"kubernetes.io/projected/97199128-8701-401c-bb22-55e0f0239271-kube-api-access-f4tll\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130648 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130667 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/97199128-8701-401c-bb22-55e0f0239271-logs\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130679 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/97199128-8701-401c-bb22-55e0f0239271-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130901 5136 generic.go:334] "Generic (PLEG): container finished" podID="97199128-8701-401c-bb22-55e0f0239271" containerID="efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69" exitCode=0
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.130942 5136 generic.go:334] "Generic (PLEG): container finished" podID="97199128-8701-401c-bb22-55e0f0239271" containerID="4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3" exitCode=143
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.133199 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerDied","Data":"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"}
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.133245 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerDied","Data":"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"}
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.133261 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"97199128-8701-401c-bb22-55e0f0239271","Type":"ContainerDied","Data":"636e3949d027a18021c284e372d93d822e803b424afcc4fb424553c251ed3c72"}
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.133323 5136 scope.go:117] "RemoveContainer" containerID="efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.133561 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.155556 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerStarted","Data":"de0681385b751e6fbc3db0d7c1a647f36f508772ed5c4a96334e28fe70febb26"}
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.155610 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerStarted","Data":"18bbdbf6b4b7085096b8a4c5650b4a999121b8fffe8ad31c3a29f6c89c1e9ff8"}
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.194205 5136 scope.go:117] "RemoveContainer" containerID="4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.194536 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.206532 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.214656 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 08:50:38 crc kubenswrapper[5136]: E0320 08:50:38.215132 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-httpd"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.215152 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-httpd"
Mar 20 08:50:38 crc kubenswrapper[5136]: E0320 08:50:38.215176 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-log"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.215183 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-log"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.215336 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-httpd"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.215363 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="97199128-8701-401c-bb22-55e0f0239271" containerName="glance-log"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.216345 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.218486 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.218735 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.224951 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.247285 5136 scope.go:117] "RemoveContainer" containerID="efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"
Mar 20 08:50:38 crc kubenswrapper[5136]: E0320 08:50:38.248736 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69\": container with ID starting with efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69 not found: ID does not exist" containerID="efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.248784 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"} err="failed to get container status \"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69\": rpc error: code = NotFound desc = could not find container \"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69\": container with ID starting with efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69 not found: ID does not exist"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.248895 5136 scope.go:117] "RemoveContainer" containerID="4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"
Mar 20 08:50:38 crc kubenswrapper[5136]: E0320 08:50:38.249905 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3\": container with ID starting with 4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3 not found: ID does not exist" containerID="4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.249949 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"} err="failed to get container status \"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3\": rpc error: code = NotFound desc = could not find container \"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3\": container with ID starting with 4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3 not found: ID does not exist"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.249969 5136 scope.go:117] "RemoveContainer" containerID="efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.251105 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69"} err="failed to get container status \"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69\": rpc error: code = NotFound desc = could not find container \"efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69\": container with ID starting with efdf53aefa1ad6d28bab83e86d596e2626a3b229ae59b4b4fdf69cbfaee69e69 not found: ID does not exist"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.251140 5136 scope.go:117] "RemoveContainer" containerID="4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.254328 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3"} err="failed to get container status \"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3\": rpc error: code = NotFound desc = could not find container \"4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3\": container with ID starting with 4dfff9b43bf39194b5752539332af86345507ed936903efaeb6ad184c44a3dd3 not found: ID does not exist"
Mar 20 08:50:38 crc kubenswrapper[5136]: E0320 08:50:38.267132 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod97199128_8701_401c_bb22_55e0f0239271.slice\": RecentStats: unable to find data in memory cache]"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333414 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333457 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333573 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333636 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333659 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333730 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l26sf\" (UniqueName: \"kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.333777 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.409758 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97199128-8701-401c-bb22-55e0f0239271" path="/var/lib/kubelet/pods/97199128-8701-401c-bb22-55e0f0239271/volumes"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435108 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435144 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435192 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l26sf\" (UniqueName: \"kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435229 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435297 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.435389 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.436747 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName:
\"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.436842 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.441055 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.441868 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.442008 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.442287 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.456205 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l26sf\" (UniqueName: \"kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf\") pod \"glance-default-internal-api-0\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " pod="openstack/glance-default-internal-api-0" Mar 20 08:50:38 crc kubenswrapper[5136]: I0320 08:50:38.555571 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.051541 5136 scope.go:117] "RemoveContainer" containerID="a7d9dee7dfd341c20d54bcc9a10648dd04c5eaeec50a978661f3c530263c499e" Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.054732 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.130321 5136 scope.go:117] "RemoveContainer" containerID="c0135a379aa43c0ef1ad29a602be6db0384857bc32c56cdc2b2d0040cfa4649a" Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.168081 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerStarted","Data":"0a442b375725a08359ac9c238f48642a4c758f6fef43750c9ef6734e62c274b1"} Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.175072 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerStarted","Data":"10a9b30b7e2523cc2c1ff704e0f7c5ae56f024de6f82bf322b7b1d7e0001c420"} Mar 20 08:50:39 crc kubenswrapper[5136]: I0320 08:50:39.205801 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-default-external-api-0" podStartSLOduration=3.205779328 podStartE2EDuration="3.205779328s" podCreationTimestamp="2026-03-20 08:50:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:39.201197596 +0000 UTC m=+7271.460508747" watchObservedRunningTime="2026-03-20 08:50:39.205779328 +0000 UTC m=+7271.465090479" Mar 20 08:50:40 crc kubenswrapper[5136]: I0320 08:50:40.193546 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerStarted","Data":"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c"} Mar 20 08:50:40 crc kubenswrapper[5136]: I0320 08:50:40.194138 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerStarted","Data":"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3"} Mar 20 08:50:40 crc kubenswrapper[5136]: I0320 08:50:40.216925 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=2.216910343 podStartE2EDuration="2.216910343s" podCreationTimestamp="2026-03-20 08:50:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:50:40.213296261 +0000 UTC m=+7272.472607412" watchObservedRunningTime="2026-03-20 08:50:40.216910343 +0000 UTC m=+7272.476221494" Mar 20 08:50:42 crc kubenswrapper[5136]: I0320 08:50:42.748035 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:50:42 crc kubenswrapper[5136]: I0320 08:50:42.817806 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"] Mar 20 08:50:42 
crc kubenswrapper[5136]: I0320 08:50:42.818056 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="dnsmasq-dns" containerID="cri-o://70907013083abb8f01ef74071f6df304ed026e839ff073ff1e663910330022e5" gracePeriod=10 Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.232345 5136 generic.go:334] "Generic (PLEG): container finished" podID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerID="70907013083abb8f01ef74071f6df304ed026e839ff073ff1e663910330022e5" exitCode=0 Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.232560 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" event={"ID":"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22","Type":"ContainerDied","Data":"70907013083abb8f01ef74071f6df304ed026e839ff073ff1e663910330022e5"} Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.284257 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.337864 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config\") pod \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.337929 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hffqp\" (UniqueName: \"kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp\") pod \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.337976 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb\") pod \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.338116 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb\") pod \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.338168 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc\") pod \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\" (UID: \"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22\") " Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.343828 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp" (OuterVolumeSpecName: "kube-api-access-hffqp") pod "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" (UID: "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22"). InnerVolumeSpecName "kube-api-access-hffqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.348644 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hffqp\" (UniqueName: \"kubernetes.io/projected/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-kube-api-access-hffqp\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.380531 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" (UID: "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.380599 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" (UID: "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.385841 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config" (OuterVolumeSpecName: "config") pod "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" (UID: "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.387010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" (UID: "6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.450345 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.450386 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.450399 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:43 crc kubenswrapper[5136]: I0320 08:50:43.450410 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.249224 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" event={"ID":"6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22","Type":"ContainerDied","Data":"1060eb1db927e589f382cb4a2cb4756b677bac6f172c644881bb0448e3071e35"} Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.249301 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78db57ffd5-mzbfx" Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.249608 5136 scope.go:117] "RemoveContainer" containerID="70907013083abb8f01ef74071f6df304ed026e839ff073ff1e663910330022e5" Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.291740 5136 scope.go:117] "RemoveContainer" containerID="3ff6f40c02029e2b21fb76159d8a4a46d3d5ada3e12371991cb9ff0c2549f74e" Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.297981 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"] Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.314651 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78db57ffd5-mzbfx"] Mar 20 08:50:44 crc kubenswrapper[5136]: I0320 08:50:44.411049 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" path="/var/lib/kubelet/pods/6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22/volumes" Mar 20 08:50:45 crc kubenswrapper[5136]: I0320 08:50:45.822507 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:50:45 crc kubenswrapper[5136]: I0320 08:50:45.822597 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:50:46 crc kubenswrapper[5136]: I0320 08:50:46.482776 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 08:50:46 crc kubenswrapper[5136]: I0320 
08:50:46.483136 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 08:50:46 crc kubenswrapper[5136]: I0320 08:50:46.531632 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 08:50:46 crc kubenswrapper[5136]: I0320 08:50:46.531737 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 08:50:47 crc kubenswrapper[5136]: I0320 08:50:47.282136 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 08:50:47 crc kubenswrapper[5136]: I0320 08:50:47.282220 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 08:50:48 crc kubenswrapper[5136]: I0320 08:50:48.744004 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-7bb4cc7c98-dzzhq" podUID="4c981a48-1ae6-4c06-90ed-4333de6a14d2" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.53:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:50:49 crc kubenswrapper[5136]: I0320 08:50:49.430379 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:49 crc kubenswrapper[5136]: I0320 08:50:49.430735 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:49 crc kubenswrapper[5136]: I0320 08:50:49.456643 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:49 crc kubenswrapper[5136]: I0320 08:50:49.458681 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 
08:50:50 crc kubenswrapper[5136]: I0320 08:50:50.443490 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:50 crc kubenswrapper[5136]: I0320 08:50:50.443765 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:50 crc kubenswrapper[5136]: I0320 08:50:50.530262 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 08:50:50 crc kubenswrapper[5136]: I0320 08:50:50.530451 5136 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 20 08:50:50 crc kubenswrapper[5136]: I0320 08:50:50.546076 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 08:50:52 crc kubenswrapper[5136]: I0320 08:50:52.418737 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 08:50:52 crc kubenswrapper[5136]: I0320 08:50:52.446954 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.421138 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-85wqc"] Mar 20 08:51:00 crc kubenswrapper[5136]: E0320 08:51:00.421921 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="init" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.421935 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="init" Mar 20 08:51:00 crc kubenswrapper[5136]: E0320 08:51:00.421971 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="dnsmasq-dns" Mar 20 08:51:00 crc kubenswrapper[5136]: 
I0320 08:51:00.421977 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="dnsmasq-dns" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.422140 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bfb6fbd-6d0c-477e-ac3b-29b968b9eb22" containerName="dnsmasq-dns" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.422643 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-85wqc" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.434931 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-85wqc"] Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.517391 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-e4e3-account-create-update-htnkq"] Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.518414 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e4e3-account-create-update-htnkq" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.520317 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.523327 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts\") pod \"placement-db-create-85wqc\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.523365 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sg4fl\" (UniqueName: \"kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl\") pod \"placement-db-create-85wqc\" (UID: 
\"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.531656 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e4e3-account-create-update-htnkq"] Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.624655 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.624712 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26g8n\" (UniqueName: \"kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.624917 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts\") pod \"placement-db-create-85wqc\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.624996 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sg4fl\" (UniqueName: \"kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl\") pod \"placement-db-create-85wqc\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 
08:51:00.625667 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts\") pod \"placement-db-create-85wqc\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.646629 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sg4fl\" (UniqueName: \"kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl\") pod \"placement-db-create-85wqc\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " pod="openstack/placement-db-create-85wqc" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.727136 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.727211 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26g8n\" (UniqueName: \"kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.729842 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq" Mar 20 
08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.740136 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-85wqc" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.745710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26g8n\" (UniqueName: \"kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n\") pod \"placement-e4e3-account-create-update-htnkq\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " pod="openstack/placement-e4e3-account-create-update-htnkq" Mar 20 08:51:00 crc kubenswrapper[5136]: I0320 08:51:00.834645 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-e4e3-account-create-update-htnkq" Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.206095 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-85wqc"] Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.288172 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-e4e3-account-create-update-htnkq"] Mar 20 08:51:01 crc kubenswrapper[5136]: W0320 08:51:01.292777 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb48a8f95_9236_458f_a8ab_fb15f6878172.slice/crio-5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb WatchSource:0}: Error finding container 5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb: Status 404 returned error can't find the container with id 5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.555320 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e4e3-account-create-update-htnkq" 
event={"ID":"b48a8f95-9236-458f-a8ab-fb15f6878172","Type":"ContainerStarted","Data":"f2ac24f272d6a9df55f1b17c9f403e8fc1875096d56818de2641768b249208a8"} Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.555374 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e4e3-account-create-update-htnkq" event={"ID":"b48a8f95-9236-458f-a8ab-fb15f6878172","Type":"ContainerStarted","Data":"5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb"} Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.559542 5136 generic.go:334] "Generic (PLEG): container finished" podID="2204982c-c8aa-4b18-a455-71915264f644" containerID="97846ae11696236889350b3c9161e329e32ca1f71469f4c6bc5cd1b32b64434b" exitCode=0 Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.559579 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-85wqc" event={"ID":"2204982c-c8aa-4b18-a455-71915264f644","Type":"ContainerDied","Data":"97846ae11696236889350b3c9161e329e32ca1f71469f4c6bc5cd1b32b64434b"} Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.559602 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-85wqc" event={"ID":"2204982c-c8aa-4b18-a455-71915264f644","Type":"ContainerStarted","Data":"27c28f71bc6ec290e2c720dfcc5c38d23cb3d9f05968c5658b2c6d4079823d85"} Mar 20 08:51:01 crc kubenswrapper[5136]: I0320 08:51:01.583636 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-e4e3-account-create-update-htnkq" podStartSLOduration=1.583618223 podStartE2EDuration="1.583618223s" podCreationTimestamp="2026-03-20 08:51:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:51:01.572582162 +0000 UTC m=+7293.831893313" watchObservedRunningTime="2026-03-20 08:51:01.583618223 +0000 UTC m=+7293.842929374" Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 
08:51:02.574788 5136 generic.go:334] "Generic (PLEG): container finished" podID="b48a8f95-9236-458f-a8ab-fb15f6878172" containerID="f2ac24f272d6a9df55f1b17c9f403e8fc1875096d56818de2641768b249208a8" exitCode=0 Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.574864 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e4e3-account-create-update-htnkq" event={"ID":"b48a8f95-9236-458f-a8ab-fb15f6878172","Type":"ContainerDied","Data":"f2ac24f272d6a9df55f1b17c9f403e8fc1875096d56818de2641768b249208a8"} Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.879034 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-85wqc" Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.966286 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sg4fl\" (UniqueName: \"kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl\") pod \"2204982c-c8aa-4b18-a455-71915264f644\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.966488 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts\") pod \"2204982c-c8aa-4b18-a455-71915264f644\" (UID: \"2204982c-c8aa-4b18-a455-71915264f644\") " Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.967533 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2204982c-c8aa-4b18-a455-71915264f644" (UID: "2204982c-c8aa-4b18-a455-71915264f644"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:02 crc kubenswrapper[5136]: I0320 08:51:02.973912 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl" (OuterVolumeSpecName: "kube-api-access-sg4fl") pod "2204982c-c8aa-4b18-a455-71915264f644" (UID: "2204982c-c8aa-4b18-a455-71915264f644"). InnerVolumeSpecName "kube-api-access-sg4fl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.069142 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sg4fl\" (UniqueName: \"kubernetes.io/projected/2204982c-c8aa-4b18-a455-71915264f644-kube-api-access-sg4fl\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.069425 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2204982c-c8aa-4b18-a455-71915264f644-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.594008 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-85wqc" event={"ID":"2204982c-c8aa-4b18-a455-71915264f644","Type":"ContainerDied","Data":"27c28f71bc6ec290e2c720dfcc5c38d23cb3d9f05968c5658b2c6d4079823d85"} Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.597006 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27c28f71bc6ec290e2c720dfcc5c38d23cb3d9f05968c5658b2c6d4079823d85" Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.594074 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-85wqc" Mar 20 08:51:03 crc kubenswrapper[5136]: I0320 08:51:03.967805 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e4e3-account-create-update-htnkq" Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.091900 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts\") pod \"b48a8f95-9236-458f-a8ab-fb15f6878172\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.092051 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26g8n\" (UniqueName: \"kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n\") pod \"b48a8f95-9236-458f-a8ab-fb15f6878172\" (UID: \"b48a8f95-9236-458f-a8ab-fb15f6878172\") " Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.092495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b48a8f95-9236-458f-a8ab-fb15f6878172" (UID: "b48a8f95-9236-458f-a8ab-fb15f6878172"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.096033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n" (OuterVolumeSpecName: "kube-api-access-26g8n") pod "b48a8f95-9236-458f-a8ab-fb15f6878172" (UID: "b48a8f95-9236-458f-a8ab-fb15f6878172"). InnerVolumeSpecName "kube-api-access-26g8n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.194711 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b48a8f95-9236-458f-a8ab-fb15f6878172-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.194764 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26g8n\" (UniqueName: \"kubernetes.io/projected/b48a8f95-9236-458f-a8ab-fb15f6878172-kube-api-access-26g8n\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.605920 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-e4e3-account-create-update-htnkq" event={"ID":"b48a8f95-9236-458f-a8ab-fb15f6878172","Type":"ContainerDied","Data":"5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb"} Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.605955 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b0d6ec6575e4690b9ae3e03cb3f839ab86e11315534a360a50495ad74bfdceb" Mar 20 08:51:04 crc kubenswrapper[5136]: I0320 08:51:04.606007 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-e4e3-account-create-update-htnkq" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.071168 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"] Mar 20 08:51:06 crc kubenswrapper[5136]: E0320 08:51:06.071933 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b48a8f95-9236-458f-a8ab-fb15f6878172" containerName="mariadb-account-create-update" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.071954 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b48a8f95-9236-458f-a8ab-fb15f6878172" containerName="mariadb-account-create-update" Mar 20 08:51:06 crc kubenswrapper[5136]: E0320 08:51:06.071996 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2204982c-c8aa-4b18-a455-71915264f644" containerName="mariadb-database-create" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.072005 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2204982c-c8aa-4b18-a455-71915264f644" containerName="mariadb-database-create" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.072168 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2204982c-c8aa-4b18-a455-71915264f644" containerName="mariadb-database-create" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.072186 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b48a8f95-9236-458f-a8ab-fb15f6878172" containerName="mariadb-account-create-update" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.076421 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.095532 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"] Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.117509 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-2d5zx"] Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.118895 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.120937 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-97ndb" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.121228 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.121394 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.157992 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2d5zx"] Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248119 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248196 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " 
pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248230 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248259 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248323 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z9vm\" (UniqueName: \"kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248367 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248394 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " 
pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248456 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmftd\" (UniqueName: \"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248480 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.248500 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.349696 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350010 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " 
pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350061 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z9vm\" (UniqueName: \"kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350092 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350114 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350158 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmftd\" (UniqueName: \"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350174 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc 
kubenswrapper[5136]: I0320 08:51:06.350188 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350225 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.350252 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.351121 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.352603 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.353762 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.354024 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.354865 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.359710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.359997 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.362177 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data\") pod \"placement-db-sync-2d5zx\" (UID: 
\"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.370607 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z9vm\" (UniqueName: \"kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm\") pod \"placement-db-sync-2d5zx\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.376889 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmftd\" (UniqueName: \"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd\") pod \"dnsmasq-dns-5cdd4cf5b7-8vjw2\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.402567 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.440274 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.897978 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"] Mar 20 08:51:06 crc kubenswrapper[5136]: I0320 08:51:06.956505 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-2d5zx"] Mar 20 08:51:06 crc kubenswrapper[5136]: W0320 08:51:06.969025 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a2341fa_02fc_4b08_a2a4_2272078db5d9.slice/crio-c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6 WatchSource:0}: Error finding container c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6: Status 404 returned error can't find the container with id c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6 Mar 20 08:51:07 crc kubenswrapper[5136]: I0320 08:51:07.777499 5136 generic.go:334] "Generic (PLEG): container finished" podID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerID="174d06d5a4cd8a3ee5fe8c3756254a01a6a8554baf9bae2be57775301d65bd05" exitCode=0 Mar 20 08:51:07 crc kubenswrapper[5136]: I0320 08:51:07.777547 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" event={"ID":"2902cdfa-3695-49ec-a36d-73082b9aa5a5","Type":"ContainerDied","Data":"174d06d5a4cd8a3ee5fe8c3756254a01a6a8554baf9bae2be57775301d65bd05"} Mar 20 08:51:07 crc kubenswrapper[5136]: I0320 08:51:07.777848 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" event={"ID":"2902cdfa-3695-49ec-a36d-73082b9aa5a5","Type":"ContainerStarted","Data":"3724ba25b3c3d5b60071b8d78e6fb6e8e43e3c7f75f11f016def345af42800c4"} Mar 20 08:51:07 crc kubenswrapper[5136]: I0320 08:51:07.779522 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2d5zx" 
event={"ID":"6a2341fa-02fc-4b08-a2a4-2272078db5d9","Type":"ContainerStarted","Data":"c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6"} Mar 20 08:51:08 crc kubenswrapper[5136]: I0320 08:51:08.855645 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" event={"ID":"2902cdfa-3695-49ec-a36d-73082b9aa5a5","Type":"ContainerStarted","Data":"11cce7a508814881b536262c59bb79c78ef540e64c7ff86205cc1f7942262b6d"} Mar 20 08:51:08 crc kubenswrapper[5136]: I0320 08:51:08.856244 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:08 crc kubenswrapper[5136]: I0320 08:51:08.880131 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" podStartSLOduration=2.880108205 podStartE2EDuration="2.880108205s" podCreationTimestamp="2026-03-20 08:51:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:51:08.876255936 +0000 UTC m=+7301.135567087" watchObservedRunningTime="2026-03-20 08:51:08.880108205 +0000 UTC m=+7301.139419356" Mar 20 08:51:11 crc kubenswrapper[5136]: I0320 08:51:11.879146 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2d5zx" event={"ID":"6a2341fa-02fc-4b08-a2a4-2272078db5d9","Type":"ContainerStarted","Data":"ab11099fc5fbb9ac6e0c34feae9a40a9addc504e685cccc5bc8ac39fbfb3793c"} Mar 20 08:51:11 crc kubenswrapper[5136]: I0320 08:51:11.904509 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-2d5zx" podStartSLOduration=1.974774254 podStartE2EDuration="5.90448774s" podCreationTimestamp="2026-03-20 08:51:06 +0000 UTC" firstStartedPulling="2026-03-20 08:51:06.971988089 +0000 UTC m=+7299.231299230" lastFinishedPulling="2026-03-20 08:51:10.901701575 +0000 UTC m=+7303.161012716" 
observedRunningTime="2026-03-20 08:51:11.895583965 +0000 UTC m=+7304.154895146" watchObservedRunningTime="2026-03-20 08:51:11.90448774 +0000 UTC m=+7304.163798891" Mar 20 08:51:12 crc kubenswrapper[5136]: I0320 08:51:12.892221 5136 generic.go:334] "Generic (PLEG): container finished" podID="6a2341fa-02fc-4b08-a2a4-2272078db5d9" containerID="ab11099fc5fbb9ac6e0c34feae9a40a9addc504e685cccc5bc8ac39fbfb3793c" exitCode=0 Mar 20 08:51:12 crc kubenswrapper[5136]: I0320 08:51:12.892261 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2d5zx" event={"ID":"6a2341fa-02fc-4b08-a2a4-2272078db5d9","Type":"ContainerDied","Data":"ab11099fc5fbb9ac6e0c34feae9a40a9addc504e685cccc5bc8ac39fbfb3793c"} Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.337203 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.467314 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z9vm\" (UniqueName: \"kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm\") pod \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.467652 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs\") pod \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.467736 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data\") pod \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " Mar 20 08:51:14 crc kubenswrapper[5136]: 
I0320 08:51:14.467762 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle\") pod \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.467916 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts\") pod \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\" (UID: \"6a2341fa-02fc-4b08-a2a4-2272078db5d9\") " Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.467960 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs" (OuterVolumeSpecName: "logs") pod "6a2341fa-02fc-4b08-a2a4-2272078db5d9" (UID: "6a2341fa-02fc-4b08-a2a4-2272078db5d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.468399 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a2341fa-02fc-4b08-a2a4-2272078db5d9-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.475261 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts" (OuterVolumeSpecName: "scripts") pod "6a2341fa-02fc-4b08-a2a4-2272078db5d9" (UID: "6a2341fa-02fc-4b08-a2a4-2272078db5d9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.487101 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm" (OuterVolumeSpecName: "kube-api-access-2z9vm") pod "6a2341fa-02fc-4b08-a2a4-2272078db5d9" (UID: "6a2341fa-02fc-4b08-a2a4-2272078db5d9"). InnerVolumeSpecName "kube-api-access-2z9vm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.494788 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a2341fa-02fc-4b08-a2a4-2272078db5d9" (UID: "6a2341fa-02fc-4b08-a2a4-2272078db5d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.501471 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data" (OuterVolumeSpecName: "config-data") pod "6a2341fa-02fc-4b08-a2a4-2272078db5d9" (UID: "6a2341fa-02fc-4b08-a2a4-2272078db5d9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.572668 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z9vm\" (UniqueName: \"kubernetes.io/projected/6a2341fa-02fc-4b08-a2a4-2272078db5d9-kube-api-access-2z9vm\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.572731 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.572744 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.572757 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a2341fa-02fc-4b08-a2a4-2272078db5d9-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.911919 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-2d5zx" event={"ID":"6a2341fa-02fc-4b08-a2a4-2272078db5d9","Type":"ContainerDied","Data":"c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6"} Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.911966 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3ddb769b7a99226c67cb9c4282c1e01b292ce035d09fab727db494ec6b52bc6" Mar 20 08:51:14 crc kubenswrapper[5136]: I0320 08:51:14.912065 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-2d5zx" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.136909 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 08:51:15 crc kubenswrapper[5136]: E0320 08:51:15.137643 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2341fa-02fc-4b08-a2a4-2272078db5d9" containerName="placement-db-sync" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.137691 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2341fa-02fc-4b08-a2a4-2272078db5d9" containerName="placement-db-sync" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.138154 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a2341fa-02fc-4b08-a2a4-2272078db5d9" containerName="placement-db-sync" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.140389 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.148362 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.150737 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.150894 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-97ndb" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.151209 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.151972 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.152150 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"cert-placement-internal-svc" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.285781 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.286120 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.286231 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.286593 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.286704 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " 
pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.287005 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.287207 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4l7c\" (UniqueName: \"kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.388688 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.388743 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.388795 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 
20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.388892 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4l7c\" (UniqueName: \"kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.388976 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.389005 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.389033 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.389465 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.392663 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.392864 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.393188 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.396528 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.396735 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.408470 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4l7c\" (UniqueName: 
\"kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c\") pod \"placement-674ffbb556-dfk75\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") " pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.464601 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.822325 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.822708 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:51:15 crc kubenswrapper[5136]: I0320 08:51:15.998498 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 08:51:16 crc kubenswrapper[5136]: W0320 08:51:16.005929 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7db26f77_c83b_4eb6_b513_6b0b2be6ebeb.slice/crio-102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255 WatchSource:0}: Error finding container 102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255: Status 404 returned error can't find the container with id 102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255 Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.412774 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.485147 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"] Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.485904 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="dnsmasq-dns" containerID="cri-o://0ebc36c7db7ba6a4fd167019fee7e33605b31fe2474b4deaee19fad2dbe59690" gracePeriod=10 Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.940622 5136 generic.go:334] "Generic (PLEG): container finished" podID="dde4894e-af50-4137-8cb4-469a0363b248" containerID="0ebc36c7db7ba6a4fd167019fee7e33605b31fe2474b4deaee19fad2dbe59690" exitCode=0 Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.940710 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" event={"ID":"dde4894e-af50-4137-8cb4-469a0363b248","Type":"ContainerDied","Data":"0ebc36c7db7ba6a4fd167019fee7e33605b31fe2474b4deaee19fad2dbe59690"} Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.943349 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerStarted","Data":"9849b01109acdf6259a4119de8fce764d067a8b917a7d5b6965b6bd00e1aa60a"} Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.943392 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerStarted","Data":"b1983019e3cc484fe8f15d4854d502ecd0a69d384bd0f1cd05cd048f9cc159a0"} Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.943403 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" 
event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerStarted","Data":"102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255"} Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.943905 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.943953 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:16 crc kubenswrapper[5136]: I0320 08:51:16.974689 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-674ffbb556-dfk75" podStartSLOduration=1.9746687939999998 podStartE2EDuration="1.974668794s" podCreationTimestamp="2026-03-20 08:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:51:16.965929763 +0000 UTC m=+7309.225240954" watchObservedRunningTime="2026-03-20 08:51:16.974668794 +0000 UTC m=+7309.233979955" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.046389 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.226561 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb\") pod \"dde4894e-af50-4137-8cb4-469a0363b248\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.226635 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb\") pod \"dde4894e-af50-4137-8cb4-469a0363b248\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.226692 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc\") pod \"dde4894e-af50-4137-8cb4-469a0363b248\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.226874 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config\") pod \"dde4894e-af50-4137-8cb4-469a0363b248\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.226905 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkqpm\" (UniqueName: \"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm\") pod \"dde4894e-af50-4137-8cb4-469a0363b248\" (UID: \"dde4894e-af50-4137-8cb4-469a0363b248\") " Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.238974 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm" (OuterVolumeSpecName: "kube-api-access-xkqpm") pod "dde4894e-af50-4137-8cb4-469a0363b248" (UID: "dde4894e-af50-4137-8cb4-469a0363b248"). InnerVolumeSpecName "kube-api-access-xkqpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.268223 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config" (OuterVolumeSpecName: "config") pod "dde4894e-af50-4137-8cb4-469a0363b248" (UID: "dde4894e-af50-4137-8cb4-469a0363b248"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.285214 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dde4894e-af50-4137-8cb4-469a0363b248" (UID: "dde4894e-af50-4137-8cb4-469a0363b248"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.289192 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dde4894e-af50-4137-8cb4-469a0363b248" (UID: "dde4894e-af50-4137-8cb4-469a0363b248"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.298551 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dde4894e-af50-4137-8cb4-469a0363b248" (UID: "dde4894e-af50-4137-8cb4-469a0363b248"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.329073 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.329107 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkqpm\" (UniqueName: \"kubernetes.io/projected/dde4894e-af50-4137-8cb4-469a0363b248-kube-api-access-xkqpm\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.329119 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.329190 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.329200 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dde4894e-af50-4137-8cb4-469a0363b248-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.953673 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" event={"ID":"dde4894e-af50-4137-8cb4-469a0363b248","Type":"ContainerDied","Data":"82b5c6a87292a840c7626e0d30263221c4e7065fd0f2c1e7d5b5956e5a5ba695"} Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.953725 5136 scope.go:117] "RemoveContainer" containerID="0ebc36c7db7ba6a4fd167019fee7e33605b31fe2474b4deaee19fad2dbe59690" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.953859 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-749cf87df7-5r4jn" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.990914 5136 scope.go:117] "RemoveContainer" containerID="38d62cb7f741d8ee572a6c46fb9a977b9c469e6392e17bbc74b7fe94c516a377" Mar 20 08:51:17 crc kubenswrapper[5136]: I0320 08:51:17.994159 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"] Mar 20 08:51:18 crc kubenswrapper[5136]: I0320 08:51:18.001277 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-749cf87df7-5r4jn"] Mar 20 08:51:18 crc kubenswrapper[5136]: I0320 08:51:18.412868 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dde4894e-af50-4137-8cb4-469a0363b248" path="/var/lib/kubelet/pods/dde4894e-af50-4137-8cb4-469a0363b248/volumes" Mar 20 08:51:45 crc kubenswrapper[5136]: I0320 08:51:45.822434 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:51:45 crc kubenswrapper[5136]: I0320 08:51:45.823044 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:51:45 crc kubenswrapper[5136]: I0320 08:51:45.823102 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:51:45 crc kubenswrapper[5136]: I0320 08:51:45.823887 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:51:45 crc kubenswrapper[5136]: I0320 08:51:45.823952 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62" gracePeriod=600 Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 08:51:46.195219 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62" exitCode=0 Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 08:51:46.195268 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62"} Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 08:51:46.195327 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"} Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 08:51:46.195378 5136 scope.go:117] "RemoveContainer" containerID="30a18324e99d184d144692bea57a583c7091e00e48958453a587d0287516835a" Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 08:51:46.518902 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:51:46 crc kubenswrapper[5136]: I0320 
08:51:46.545892 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-674ffbb556-dfk75" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.129722 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566612-gmnm6"] Mar 20 08:52:00 crc kubenswrapper[5136]: E0320 08:52:00.130618 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="dnsmasq-dns" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.130631 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="dnsmasq-dns" Mar 20 08:52:00 crc kubenswrapper[5136]: E0320 08:52:00.130642 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="init" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.130648 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="init" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.130854 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dde4894e-af50-4137-8cb4-469a0363b248" containerName="dnsmasq-dns" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.131410 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.133960 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.134188 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.134551 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.136893 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566612-gmnm6"] Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.231628 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvvwh\" (UniqueName: \"kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh\") pod \"auto-csr-approver-29566612-gmnm6\" (UID: \"474fd165-50ec-4d02-9f52-eb18382cee27\") " pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.333358 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvvwh\" (UniqueName: \"kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh\") pod \"auto-csr-approver-29566612-gmnm6\" (UID: \"474fd165-50ec-4d02-9f52-eb18382cee27\") " pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.352833 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvvwh\" (UniqueName: \"kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh\") pod \"auto-csr-approver-29566612-gmnm6\" (UID: \"474fd165-50ec-4d02-9f52-eb18382cee27\") " 
pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.479064 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:00 crc kubenswrapper[5136]: I0320 08:52:00.891368 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566612-gmnm6"] Mar 20 08:52:00 crc kubenswrapper[5136]: W0320 08:52:00.895793 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod474fd165_50ec_4d02_9f52_eb18382cee27.slice/crio-9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8 WatchSource:0}: Error finding container 9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8: Status 404 returned error can't find the container with id 9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8 Mar 20 08:52:01 crc kubenswrapper[5136]: I0320 08:52:01.323249 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" event={"ID":"474fd165-50ec-4d02-9f52-eb18382cee27","Type":"ContainerStarted","Data":"9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8"} Mar 20 08:52:03 crc kubenswrapper[5136]: I0320 08:52:03.348587 5136 generic.go:334] "Generic (PLEG): container finished" podID="474fd165-50ec-4d02-9f52-eb18382cee27" containerID="02bf2fddb0787ba56f7a7d4d2929f25e0b16aff46d2b34aac1bc69f87f328612" exitCode=0 Mar 20 08:52:03 crc kubenswrapper[5136]: I0320 08:52:03.348670 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" event={"ID":"474fd165-50ec-4d02-9f52-eb18382cee27","Type":"ContainerDied","Data":"02bf2fddb0787ba56f7a7d4d2929f25e0b16aff46d2b34aac1bc69f87f328612"} Mar 20 08:52:04 crc kubenswrapper[5136]: I0320 08:52:04.664675 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:04 crc kubenswrapper[5136]: I0320 08:52:04.717949 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvvwh\" (UniqueName: \"kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh\") pod \"474fd165-50ec-4d02-9f52-eb18382cee27\" (UID: \"474fd165-50ec-4d02-9f52-eb18382cee27\") " Mar 20 08:52:04 crc kubenswrapper[5136]: I0320 08:52:04.723674 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh" (OuterVolumeSpecName: "kube-api-access-pvvwh") pod "474fd165-50ec-4d02-9f52-eb18382cee27" (UID: "474fd165-50ec-4d02-9f52-eb18382cee27"). InnerVolumeSpecName "kube-api-access-pvvwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:04 crc kubenswrapper[5136]: I0320 08:52:04.819659 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvvwh\" (UniqueName: \"kubernetes.io/projected/474fd165-50ec-4d02-9f52-eb18382cee27-kube-api-access-pvvwh\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:05 crc kubenswrapper[5136]: I0320 08:52:05.367660 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" event={"ID":"474fd165-50ec-4d02-9f52-eb18382cee27","Type":"ContainerDied","Data":"9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8"} Mar 20 08:52:05 crc kubenswrapper[5136]: I0320 08:52:05.367977 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9fd84e8a82eb9a78055f84ab61d652223315364db9d7623732f6b7dbc837c1b8" Mar 20 08:52:05 crc kubenswrapper[5136]: I0320 08:52:05.367744 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566612-gmnm6" Mar 20 08:52:05 crc kubenswrapper[5136]: I0320 08:52:05.742422 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566606-rdf48"] Mar 20 08:52:05 crc kubenswrapper[5136]: I0320 08:52:05.752687 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566606-rdf48"] Mar 20 08:52:06 crc kubenswrapper[5136]: I0320 08:52:06.409585 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="895f2400-9932-4967-831f-f047de8c0f63" path="/var/lib/kubelet/pods/895f2400-9932-4967-831f-f047de8c0f63/volumes" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.270430 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-plxtl"] Mar 20 08:52:08 crc kubenswrapper[5136]: E0320 08:52:08.271104 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="474fd165-50ec-4d02-9f52-eb18382cee27" containerName="oc" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.271116 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="474fd165-50ec-4d02-9f52-eb18382cee27" containerName="oc" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.271465 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="474fd165-50ec-4d02-9f52-eb18382cee27" containerName="oc" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.272076 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.277606 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-plxtl"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.354478 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-hkzk7"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.355788 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.364054 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hkzk7"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.381724 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8fv7\" (UniqueName: \"kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.381832 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.470289 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c7dc-account-create-update-6rchx"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.472008 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.478228 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.483689 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.483889 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lps9b\" (UniqueName: \"kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.483967 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.484032 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8fv7\" (UniqueName: \"kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.484662 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.490633 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-6rchx"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.508395 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8fv7\" (UniqueName: \"kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7\") pod \"nova-api-db-create-plxtl\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.573679 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-m289f"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.574701 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.580960 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m289f"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.586344 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cpxs\" (UniqueName: \"kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.586397 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.586442 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lps9b\" (UniqueName: \"kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.586488 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.587322 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.589746 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.614730 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lps9b\" (UniqueName: \"kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b\") pod \"nova-cell0-db-create-hkzk7\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.683196 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.686854 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-bp9vg"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.687847 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.687928 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.687988 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvk5z\" (UniqueName: \"kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.688098 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cpxs\" (UniqueName: \"kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.688124 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.688951 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.690938 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.712174 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-bp9vg"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.713842 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cpxs\" (UniqueName: \"kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs\") pod \"nova-api-c7dc-account-create-update-6rchx\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.788552 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.789504 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkrtm\" (UniqueName: \"kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.789635 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.789676 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvk5z\" (UniqueName: \"kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.789737 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.790806 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.813012 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvk5z\" (UniqueName: \"kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z\") pod \"nova-cell1-db-create-m289f\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.880995 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e664-account-create-update-42278"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.882146 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.891150 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.891177 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-42278"] Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.892276 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lkrtm\" (UniqueName: \"kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.892397 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.893099 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.898287 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.915247 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lkrtm\" (UniqueName: \"kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm\") pod \"nova-cell0-adbe-account-create-update-bp9vg\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.995247 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4vdm\" (UniqueName: \"kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm\") pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:08 crc kubenswrapper[5136]: I0320 08:52:08.995418 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts\") 
pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.091658 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-plxtl"] Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.097549 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts\") pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.097654 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4vdm\" (UniqueName: \"kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm\") pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.098786 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts\") pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.106397 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.116556 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4vdm\" (UniqueName: \"kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm\") pod \"nova-cell1-e664-account-create-update-42278\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.222562 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.295668 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-hkzk7"] Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.387995 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-6rchx"] Mar 20 08:52:09 crc kubenswrapper[5136]: W0320 08:52:09.399918 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60ddf395_2544_4ebe_b1e2_37321af6438e.slice/crio-49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434 WatchSource:0}: Error finding container 49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434: Status 404 returned error can't find the container with id 49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434 Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.427368 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hkzk7" event={"ID":"d573f1ae-c37f-487a-a059-5200647084d4","Type":"ContainerStarted","Data":"b26967ed75a461b0b6b84a2af08132666c292fa50f89626a54d371c7b7fd4406"} Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.430399 5136 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/nova-api-db-create-plxtl" event={"ID":"901ef065-f425-4ab7-b726-7d98704a58f8","Type":"ContainerStarted","Data":"c40db219321d83dc20c2ac8a7868d46f48eda3e65baae9eafe26d50e1df17298"} Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.430515 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-plxtl" event={"ID":"901ef065-f425-4ab7-b726-7d98704a58f8","Type":"ContainerStarted","Data":"130bfcae1dadc538ccba696a9309fd26a26d10c1831017abf58931cd6bcfc9d4"} Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.456103 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-plxtl" podStartSLOduration=1.456085183 podStartE2EDuration="1.456085183s" podCreationTimestamp="2026-03-20 08:52:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:09.450316074 +0000 UTC m=+7361.709627215" watchObservedRunningTime="2026-03-20 08:52:09.456085183 +0000 UTC m=+7361.715396334" Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.516357 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-m289f"] Mar 20 08:52:09 crc kubenswrapper[5136]: W0320 08:52:09.516708 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4d3e02e_0f46_48dd_b9ef_8cb0135eabb4.slice/crio-cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3 WatchSource:0}: Error finding container cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3: Status 404 returned error can't find the container with id cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3 Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.676889 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-bp9vg"] Mar 20 08:52:09 crc 
kubenswrapper[5136]: W0320 08:52:09.696556 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda725d785_3630_4adc_8417_15fceaecb250.slice/crio-f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1 WatchSource:0}: Error finding container f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1: Status 404 returned error can't find the container with id f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1 Mar 20 08:52:09 crc kubenswrapper[5136]: I0320 08:52:09.780267 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-42278"] Mar 20 08:52:09 crc kubenswrapper[5136]: W0320 08:52:09.781142 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d18b334_bb20_43b9_8322_c2e847b74703.slice/crio-afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7 WatchSource:0}: Error finding container afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7: Status 404 returned error can't find the container with id afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.439076 5136 generic.go:334] "Generic (PLEG): container finished" podID="901ef065-f425-4ab7-b726-7d98704a58f8" containerID="c40db219321d83dc20c2ac8a7868d46f48eda3e65baae9eafe26d50e1df17298" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.439262 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-plxtl" event={"ID":"901ef065-f425-4ab7-b726-7d98704a58f8","Type":"ContainerDied","Data":"c40db219321d83dc20c2ac8a7868d46f48eda3e65baae9eafe26d50e1df17298"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.441328 5136 generic.go:334] "Generic (PLEG): container finished" podID="7d18b334-bb20-43b9-8322-c2e847b74703" 
containerID="f77e438e3702b6de098fbd305814d9a4eb3df2f7161e741a8bd1bf247fc8becb" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.441473 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e664-account-create-update-42278" event={"ID":"7d18b334-bb20-43b9-8322-c2e847b74703","Type":"ContainerDied","Data":"f77e438e3702b6de098fbd305814d9a4eb3df2f7161e741a8bd1bf247fc8becb"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.441503 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e664-account-create-update-42278" event={"ID":"7d18b334-bb20-43b9-8322-c2e847b74703","Type":"ContainerStarted","Data":"afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.442835 5136 generic.go:334] "Generic (PLEG): container finished" podID="a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" containerID="6de711a276196e50b3e83c58fdab583ad6f7407fccb722557535f82c9abd51a7" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.442904 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m289f" event={"ID":"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4","Type":"ContainerDied","Data":"6de711a276196e50b3e83c58fdab583ad6f7407fccb722557535f82c9abd51a7"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.442930 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m289f" event={"ID":"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4","Type":"ContainerStarted","Data":"cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.444522 5136 generic.go:334] "Generic (PLEG): container finished" podID="d573f1ae-c37f-487a-a059-5200647084d4" containerID="cee664fd2e2a84523a4d0f3b3405435f0b03db0425ff048065d98c5612016681" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.444581 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-db-create-hkzk7" event={"ID":"d573f1ae-c37f-487a-a059-5200647084d4","Type":"ContainerDied","Data":"cee664fd2e2a84523a4d0f3b3405435f0b03db0425ff048065d98c5612016681"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.446320 5136 generic.go:334] "Generic (PLEG): container finished" podID="a725d785-3630-4adc-8417-15fceaecb250" containerID="62941df7329d036b75c1f4c804a7915f68955eff793a634ef29d9182d34a9d9d" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.446375 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" event={"ID":"a725d785-3630-4adc-8417-15fceaecb250","Type":"ContainerDied","Data":"62941df7329d036b75c1f4c804a7915f68955eff793a634ef29d9182d34a9d9d"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.446394 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" event={"ID":"a725d785-3630-4adc-8417-15fceaecb250","Type":"ContainerStarted","Data":"f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.448643 5136 generic.go:334] "Generic (PLEG): container finished" podID="60ddf395-2544-4ebe-b1e2-37321af6438e" containerID="d695b9c2dbcf5b99f4e58724aa314335827d63b932809de7ba7a6c3af214ccca" exitCode=0 Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.448834 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c7dc-account-create-update-6rchx" event={"ID":"60ddf395-2544-4ebe-b1e2-37321af6438e","Type":"ContainerDied","Data":"d695b9c2dbcf5b99f4e58724aa314335827d63b932809de7ba7a6c3af214ccca"} Mar 20 08:52:10 crc kubenswrapper[5136]: I0320 08:52:10.448962 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c7dc-account-create-update-6rchx" 
event={"ID":"60ddf395-2544-4ebe-b1e2-37321af6438e","Type":"ContainerStarted","Data":"49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434"} Mar 20 08:52:11 crc kubenswrapper[5136]: I0320 08:52:11.872399 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:11 crc kubenswrapper[5136]: I0320 08:52:11.958199 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8fv7\" (UniqueName: \"kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7\") pod \"901ef065-f425-4ab7-b726-7d98704a58f8\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " Mar 20 08:52:11 crc kubenswrapper[5136]: I0320 08:52:11.958331 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts\") pod \"901ef065-f425-4ab7-b726-7d98704a58f8\" (UID: \"901ef065-f425-4ab7-b726-7d98704a58f8\") " Mar 20 08:52:11 crc kubenswrapper[5136]: I0320 08:52:11.959030 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "901ef065-f425-4ab7-b726-7d98704a58f8" (UID: "901ef065-f425-4ab7-b726-7d98704a58f8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:11 crc kubenswrapper[5136]: I0320 08:52:11.964025 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7" (OuterVolumeSpecName: "kube-api-access-n8fv7") pod "901ef065-f425-4ab7-b726-7d98704a58f8" (UID: "901ef065-f425-4ab7-b726-7d98704a58f8"). InnerVolumeSpecName "kube-api-access-n8fv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.039136 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.047941 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.058493 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.060171 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8fv7\" (UniqueName: \"kubernetes.io/projected/901ef065-f425-4ab7-b726-7d98704a58f8-kube-api-access-n8fv7\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.061092 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/901ef065-f425-4ab7-b726-7d98704a58f8-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.072229 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.089666 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162144 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts\") pod \"a725d785-3630-4adc-8417-15fceaecb250\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162413 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvk5z\" (UniqueName: \"kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z\") pod \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162553 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4vdm\" (UniqueName: \"kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm\") pod \"7d18b334-bb20-43b9-8322-c2e847b74703\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162626 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts\") pod \"d573f1ae-c37f-487a-a059-5200647084d4\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lps9b\" (UniqueName: \"kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b\") pod \"d573f1ae-c37f-487a-a059-5200647084d4\" (UID: \"d573f1ae-c37f-487a-a059-5200647084d4\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162773 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts\") pod \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\" (UID: \"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162871 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lkrtm\" (UniqueName: \"kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm\") pod \"a725d785-3630-4adc-8417-15fceaecb250\" (UID: \"a725d785-3630-4adc-8417-15fceaecb250\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162941 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts\") pod \"60ddf395-2544-4ebe-b1e2-37321af6438e\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163036 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts\") pod \"7d18b334-bb20-43b9-8322-c2e847b74703\" (UID: \"7d18b334-bb20-43b9-8322-c2e847b74703\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.162978 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a725d785-3630-4adc-8417-15fceaecb250" (UID: "a725d785-3630-4adc-8417-15fceaecb250"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163184 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cpxs\" (UniqueName: \"kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs\") pod \"60ddf395-2544-4ebe-b1e2-37321af6438e\" (UID: \"60ddf395-2544-4ebe-b1e2-37321af6438e\") " Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163349 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" (UID: "a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163456 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "60ddf395-2544-4ebe-b1e2-37321af6438e" (UID: "60ddf395-2544-4ebe-b1e2-37321af6438e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163682 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7d18b334-bb20-43b9-8322-c2e847b74703" (UID: "7d18b334-bb20-43b9-8322-c2e847b74703"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.163677 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d573f1ae-c37f-487a-a059-5200647084d4" (UID: "d573f1ae-c37f-487a-a059-5200647084d4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.164215 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a725d785-3630-4adc-8417-15fceaecb250-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.164236 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d573f1ae-c37f-487a-a059-5200647084d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.164249 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.164261 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/60ddf395-2544-4ebe-b1e2-37321af6438e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.164272 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7d18b334-bb20-43b9-8322-c2e847b74703-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.165584 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm" (OuterVolumeSpecName: "kube-api-access-b4vdm") pod "7d18b334-bb20-43b9-8322-c2e847b74703" (UID: "7d18b334-bb20-43b9-8322-c2e847b74703"). InnerVolumeSpecName "kube-api-access-b4vdm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.165638 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm" (OuterVolumeSpecName: "kube-api-access-lkrtm") pod "a725d785-3630-4adc-8417-15fceaecb250" (UID: "a725d785-3630-4adc-8417-15fceaecb250"). InnerVolumeSpecName "kube-api-access-lkrtm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.166275 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs" (OuterVolumeSpecName: "kube-api-access-7cpxs") pod "60ddf395-2544-4ebe-b1e2-37321af6438e" (UID: "60ddf395-2544-4ebe-b1e2-37321af6438e"). InnerVolumeSpecName "kube-api-access-7cpxs". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.172434 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b" (OuterVolumeSpecName: "kube-api-access-lps9b") pod "d573f1ae-c37f-487a-a059-5200647084d4" (UID: "d573f1ae-c37f-487a-a059-5200647084d4"). InnerVolumeSpecName "kube-api-access-lps9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.172712 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z" (OuterVolumeSpecName: "kube-api-access-mvk5z") pod "a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" (UID: "a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4"). InnerVolumeSpecName "kube-api-access-mvk5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.265616 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cpxs\" (UniqueName: \"kubernetes.io/projected/60ddf395-2544-4ebe-b1e2-37321af6438e-kube-api-access-7cpxs\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.265652 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvk5z\" (UniqueName: \"kubernetes.io/projected/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4-kube-api-access-mvk5z\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.265663 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4vdm\" (UniqueName: \"kubernetes.io/projected/7d18b334-bb20-43b9-8322-c2e847b74703-kube-api-access-b4vdm\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.265672 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lps9b\" (UniqueName: \"kubernetes.io/projected/d573f1ae-c37f-487a-a059-5200647084d4-kube-api-access-lps9b\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.265681 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lkrtm\" (UniqueName: \"kubernetes.io/projected/a725d785-3630-4adc-8417-15fceaecb250-kube-api-access-lkrtm\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.492093 5136 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-42278" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.492095 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e664-account-create-update-42278" event={"ID":"7d18b334-bb20-43b9-8322-c2e847b74703","Type":"ContainerDied","Data":"afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.492213 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afe76f1345656133cdb14a59e1f33e57bdc93f930586f7e085946f3bc6cc21b7" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.499340 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-m289f" event={"ID":"a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4","Type":"ContainerDied","Data":"cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.499377 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd70c9a0713b408c842d07d992c8c3097039b721f58135470605467a4371ebe3" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.499702 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-m289f" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.501424 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-hkzk7" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.501446 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-hkzk7" event={"ID":"d573f1ae-c37f-487a-a059-5200647084d4","Type":"ContainerDied","Data":"b26967ed75a461b0b6b84a2af08132666c292fa50f89626a54d371c7b7fd4406"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.501474 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b26967ed75a461b0b6b84a2af08132666c292fa50f89626a54d371c7b7fd4406" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.504528 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.505046 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-adbe-account-create-update-bp9vg" event={"ID":"a725d785-3630-4adc-8417-15fceaecb250","Type":"ContainerDied","Data":"f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.505091 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9ed4af648c1414222b9a36fceb63b8c45467b10998fb09dffaabe3cb6e99ef1" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.506747 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-6rchx" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.506783 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c7dc-account-create-update-6rchx" event={"ID":"60ddf395-2544-4ebe-b1e2-37321af6438e","Type":"ContainerDied","Data":"49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.506889 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49526f5be9e7a87b41848be0a0ee392bbffb7310111fa0630432dec6b922a434" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.509459 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-plxtl" event={"ID":"901ef065-f425-4ab7-b726-7d98704a58f8","Type":"ContainerDied","Data":"130bfcae1dadc538ccba696a9309fd26a26d10c1831017abf58931cd6bcfc9d4"} Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.509487 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="130bfcae1dadc538ccba696a9309fd26a26d10c1831017abf58931cd6bcfc9d4" Mar 20 08:52:12 crc kubenswrapper[5136]: I0320 08:52:12.509501 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-plxtl" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.073663 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4m6bk"] Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074296 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a725d785-3630-4adc-8417-15fceaecb250" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074311 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a725d785-3630-4adc-8417-15fceaecb250" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074327 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d18b334-bb20-43b9-8322-c2e847b74703" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074333 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d18b334-bb20-43b9-8322-c2e847b74703" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074343 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ddf395-2544-4ebe-b1e2-37321af6438e" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074348 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ddf395-2544-4ebe-b1e2-37321af6438e" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074359 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901ef065-f425-4ab7-b726-7d98704a58f8" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074368 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="901ef065-f425-4ab7-b726-7d98704a58f8" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074381 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d573f1ae-c37f-487a-a059-5200647084d4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074388 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d573f1ae-c37f-487a-a059-5200647084d4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: E0320 08:52:14.074415 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074421 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074563 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074579 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ddf395-2544-4ebe-b1e2-37321af6438e" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074587 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="901ef065-f425-4ab7-b726-7d98704a58f8" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074599 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a725d785-3630-4adc-8417-15fceaecb250" containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074608 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d573f1ae-c37f-487a-a059-5200647084d4" containerName="mariadb-database-create" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.074613 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d18b334-bb20-43b9-8322-c2e847b74703" 
containerName="mariadb-account-create-update" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.075167 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.077718 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nn865" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.077932 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.078235 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.120992 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4m6bk"] Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.209943 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.210028 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.210073 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.210119 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72gh2\" (UniqueName: \"kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.312620 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72gh2\" (UniqueName: \"kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.312761 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.312861 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.312943 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.319849 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.320151 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.321479 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.341522 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72gh2\" (UniqueName: \"kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2\") pod \"nova-cell0-conductor-db-sync-4m6bk\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.401592 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:14 crc kubenswrapper[5136]: I0320 08:52:14.898135 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4m6bk"] Mar 20 08:52:15 crc kubenswrapper[5136]: I0320 08:52:15.540937 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" event={"ID":"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2","Type":"ContainerStarted","Data":"2a31760774dbb636e652192217a1a8550c415372b7e0c26014f90746e934f305"} Mar 20 08:52:24 crc kubenswrapper[5136]: I0320 08:52:24.624355 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" event={"ID":"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2","Type":"ContainerStarted","Data":"7b5e974893ba339d7d465bcbfaf4888d7a35fa993cb39d96df7bf3535b3e030c"} Mar 20 08:52:29 crc kubenswrapper[5136]: I0320 08:52:29.660988 5136 generic.go:334] "Generic (PLEG): container finished" podID="7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" containerID="7b5e974893ba339d7d465bcbfaf4888d7a35fa993cb39d96df7bf3535b3e030c" exitCode=0 Mar 20 08:52:29 crc kubenswrapper[5136]: I0320 08:52:29.661085 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" event={"ID":"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2","Type":"ContainerDied","Data":"7b5e974893ba339d7d465bcbfaf4888d7a35fa993cb39d96df7bf3535b3e030c"} Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.029687 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.138647 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts\") pod \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.138750 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data\") pod \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.138810 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle\") pod \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.138994 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72gh2\" (UniqueName: \"kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2\") pod \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\" (UID: \"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2\") " Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.144716 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts" (OuterVolumeSpecName: "scripts") pod "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" (UID: "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.147624 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2" (OuterVolumeSpecName: "kube-api-access-72gh2") pod "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" (UID: "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2"). InnerVolumeSpecName "kube-api-access-72gh2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.171494 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data" (OuterVolumeSpecName: "config-data") pod "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" (UID: "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.172003 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" (UID: "7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.240289 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72gh2\" (UniqueName: \"kubernetes.io/projected/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-kube-api-access-72gh2\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.240327 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.240337 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.240348 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.684292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-4m6bk" event={"ID":"7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2","Type":"ContainerDied","Data":"2a31760774dbb636e652192217a1a8550c415372b7e0c26014f90746e934f305"}
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.684368 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-4m6bk"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.684392 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a31760774dbb636e652192217a1a8550c415372b7e0c26014f90746e934f305"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.831563 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 20 08:52:31 crc kubenswrapper[5136]: E0320 08:52:31.832003 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" containerName="nova-cell0-conductor-db-sync"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.832019 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" containerName="nova-cell0-conductor-db-sync"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.832187 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" containerName="nova-cell0-conductor-db-sync"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.832806 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.834711 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.837984 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-nn865"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.847936 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.854300 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.854591 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.854794 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjh89\" (UniqueName: \"kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.956042 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjh89\" (UniqueName: \"kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.956142 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.956193 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.962002 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.962381 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:31 crc kubenswrapper[5136]: I0320 08:52:31.976272 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xjh89\" (UniqueName: \"kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89\") pod \"nova-cell0-conductor-0\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:32 crc kubenswrapper[5136]: I0320 08:52:32.206681 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:32 crc kubenswrapper[5136]: I0320 08:52:32.688990 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"]
Mar 20 08:52:32 crc kubenswrapper[5136]: W0320 08:52:32.696892 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11508a60_8214_4811_898f_9542eee208d5.slice/crio-a3d117a77c0748e946444e66ceaf44864c92bbd1f42406d2aed51d06c0feb90d WatchSource:0}: Error finding container a3d117a77c0748e946444e66ceaf44864c92bbd1f42406d2aed51d06c0feb90d: Status 404 returned error can't find the container with id a3d117a77c0748e946444e66ceaf44864c92bbd1f42406d2aed51d06c0feb90d
Mar 20 08:52:33 crc kubenswrapper[5136]: I0320 08:52:33.706292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"11508a60-8214-4811-898f-9542eee208d5","Type":"ContainerStarted","Data":"2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c"}
Mar 20 08:52:33 crc kubenswrapper[5136]: I0320 08:52:33.706774 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:33 crc kubenswrapper[5136]: I0320 08:52:33.706791 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"11508a60-8214-4811-898f-9542eee208d5","Type":"ContainerStarted","Data":"a3d117a77c0748e946444e66ceaf44864c92bbd1f42406d2aed51d06c0feb90d"}
Mar 20 08:52:33 crc kubenswrapper[5136]: I0320 08:52:33.729296 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.72927874 podStartE2EDuration="2.72927874s" podCreationTimestamp="2026-03-20 08:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:33.721477408 +0000 UTC m=+7385.980788559" watchObservedRunningTime="2026-03-20 08:52:33.72927874 +0000 UTC m=+7385.988589891"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.232346 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.715153 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-mdczc"]
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.718284 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.745220 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mdczc"]
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.757268 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.757379 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.860597 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.863150 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.868743 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872035 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5phn\" (UniqueName: \"kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872104 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872143 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872225 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.872305 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9f4s\" (UniqueName: \"kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.886249 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.893978 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.895505 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.905120 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.910716 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976200 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9f4s\" (UniqueName: \"kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976255 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5phn\" (UniqueName: \"kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976285 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976302 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976318 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976340 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.976401 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0"
Mar 20 08:52:37 crc kubenswrapper[5136]: I0320 08:52:37.993894 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:37.995413 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:37.997899 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.003279 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.004103 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.011677 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.012522 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.013619 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.027103 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9f4s\" (UniqueName: \"kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s\") pod \"nova-scheduler-0\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") " pod="openstack/nova-scheduler-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.027499 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5phn\" (UniqueName: \"kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn\") pod \"nova-cell0-cell-mapping-mdczc\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") " pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.035890 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.077732 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.077789 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.077881 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw4wt\" (UniqueName: \"kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.087079 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.089741 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.091620 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.094502 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.104581 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.186726 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw4wt\" (UniqueName: \"kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.186879 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.186911 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w25hx\" (UniqueName: \"kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.187071 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.187158 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.187194 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.187218 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.188471 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.189018 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"]
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.192098 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.201550 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"]
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.204512 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.210547 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.216089 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw4wt\" (UniqueName: \"kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt\") pod \"nova-cell1-novncproxy-0\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.220364 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.310751 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.310803 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w25hx\" (UniqueName: \"kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.310895 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.310927 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.310963 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dls8d\" (UniqueName: \"kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311000 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311035 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311050 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311081 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311116 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311133 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cn8k\" (UniqueName: \"kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311169 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.311189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.322665 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.344728 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.359472 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.360305 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w25hx\" (UniqueName: \"kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx\") pod \"nova-api-0\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") " pod="openstack/nova-api-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447566 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447624 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447645 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cn8k\" (UniqueName: \"kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447678 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447705 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447748 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447779 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447807 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dls8d\" (UniqueName: \"kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.447881 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.457111 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.466379 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.466987 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.471690 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.471847 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.471938 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.475365 5136
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.488318 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dls8d\" (UniqueName: \"kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d\") pod \"nova-metadata-0\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") " pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.504094 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cn8k\" (UniqueName: \"kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k\") pod \"dnsmasq-dns-cb7b48dc-fv895\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.553141 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.639308 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.662842 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:38 crc kubenswrapper[5136]: I0320 08:52:38.929222 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-mdczc"] Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.043567 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:52:39 crc kubenswrapper[5136]: W0320 08:52:39.044969 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb609af52_e8bb_4279_b472_39d6e572932e.slice/crio-53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6 WatchSource:0}: Error finding container 53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6: Status 404 returned error can't find the container with id 53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6 Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.053512 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.106016 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bknwr"] Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.107359 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.113006 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.113385 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.118789 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bknwr"] Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.167642 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.168079 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.168166 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nsjd8\" (UniqueName: \"kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.168243 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: W0320 08:52:39.189562 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09516972_60d9_4cd7_96c6_adf48041a2bb.slice/crio-c13ae57bbf638b75ff104c2d84bc60b50a61972caea54b71ddd8d960d7e36e25 WatchSource:0}: Error finding container c13ae57bbf638b75ff104c2d84bc60b50a61972caea54b71ddd8d960d7e36e25: Status 404 returned error can't find the container with id c13ae57bbf638b75ff104c2d84bc60b50a61972caea54b71ddd8d960d7e36e25 Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.194131 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.270593 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.270758 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nsjd8\" (UniqueName: \"kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.270936 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.271119 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.276637 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.285651 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.289658 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nsjd8\" (UniqueName: \"kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.295911 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data\") pod \"nova-cell1-conductor-db-sync-bknwr\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") " pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.304917 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:39 crc kubenswrapper[5136]: W0320 08:52:39.309654 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ceb13df_eb0b_4512_aabe_6be6a1ee8631.slice/crio-1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9 WatchSource:0}: Error finding container 1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9: Status 404 returned error can't find the container with id 1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9 Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.321255 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"] Mar 20 08:52:39 crc kubenswrapper[5136]: W0320 08:52:39.331526 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda53d28b6_bc47_4aa3_a413_3716651dc331.slice/crio-3a00b31f5b938544931b8ec8a179f9b8845385e169e4d748a404beb81b702299 WatchSource:0}: Error finding container 3a00b31f5b938544931b8ec8a179f9b8845385e169e4d748a404beb81b702299: Status 404 returned error can't find the container with id 3a00b31f5b938544931b8ec8a179f9b8845385e169e4d748a404beb81b702299 Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.363145 5136 scope.go:117] "RemoveContainer" containerID="ee7fc0aa7d70c450967fddf706c56fe4af54a2ede94af9ae1aa1f75f2c772efc" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.469982 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bknwr" Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.843627 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mdczc" event={"ID":"0869b44d-0a1b-47ae-9836-8940a31bfcf3","Type":"ContainerStarted","Data":"ed9ea6d3f8369f00e748cdbd6f737c4f4b838eb8db7325e29aaf558dc66f2d6f"} Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.843956 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mdczc" event={"ID":"0869b44d-0a1b-47ae-9836-8940a31bfcf3","Type":"ContainerStarted","Data":"5c9a7d2b6bd777c84e35c9869b44075a2797d9ec171923d54abd4043df22cad2"} Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.850565 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerStarted","Data":"c13ae57bbf638b75ff104c2d84bc60b50a61972caea54b71ddd8d960d7e36e25"} Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.852862 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b609af52-e8bb-4279-b472-39d6e572932e","Type":"ContainerStarted","Data":"53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6"} Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.854794 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55055431-47a0-4022-a32a-5b2b1ef303ac","Type":"ContainerStarted","Data":"62d977f140d52d185ad8e335d0e34478d3fe4528e116c9595298eb659df62cab"} Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.856947 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerStarted","Data":"1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9"} Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.859300 5136 
generic.go:334] "Generic (PLEG): container finished" podID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerID="e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2" exitCode=0 Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.859352 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" event={"ID":"a53d28b6-bc47-4aa3-a413-3716651dc331","Type":"ContainerDied","Data":"e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2"} Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.859380 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" event={"ID":"a53d28b6-bc47-4aa3-a413-3716651dc331","Type":"ContainerStarted","Data":"3a00b31f5b938544931b8ec8a179f9b8845385e169e4d748a404beb81b702299"} Mar 20 08:52:39 crc kubenswrapper[5136]: I0320 08:52:39.865929 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-mdczc" podStartSLOduration=2.865906832 podStartE2EDuration="2.865906832s" podCreationTimestamp="2026-03-20 08:52:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:39.861705651 +0000 UTC m=+7392.121016822" watchObservedRunningTime="2026-03-20 08:52:39.865906832 +0000 UTC m=+7392.125217983" Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.039882 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bknwr"] Mar 20 08:52:40 crc kubenswrapper[5136]: W0320 08:52:40.052512 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc10383e2_004c_458c_922b_dd13574f12ff.slice/crio-4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649 WatchSource:0}: Error finding container 4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649: Status 404 returned error 
can't find the container with id 4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649 Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.869406 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" event={"ID":"a53d28b6-bc47-4aa3-a413-3716651dc331","Type":"ContainerStarted","Data":"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d"} Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.872417 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.877684 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bknwr" event={"ID":"c10383e2-004c-458c-922b-dd13574f12ff","Type":"ContainerStarted","Data":"036a56dc8be2b1448ecd4eaee7ae6cfc9fce54b35893d14784a5e2a194d245a2"} Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.877720 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bknwr" event={"ID":"c10383e2-004c-458c-922b-dd13574f12ff","Type":"ContainerStarted","Data":"4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649"} Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.908504 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" podStartSLOduration=2.9084849200000003 podStartE2EDuration="2.90848492s" podCreationTimestamp="2026-03-20 08:52:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:40.894979362 +0000 UTC m=+7393.154290513" watchObservedRunningTime="2026-03-20 08:52:40.90848492 +0000 UTC m=+7393.167796071" Mar 20 08:52:40 crc kubenswrapper[5136]: I0320 08:52:40.917193 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-bknwr" 
podStartSLOduration=1.917172329 podStartE2EDuration="1.917172329s" podCreationTimestamp="2026-03-20 08:52:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:40.908911163 +0000 UTC m=+7393.168222314" watchObservedRunningTime="2026-03-20 08:52:40.917172329 +0000 UTC m=+7393.176483500" Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.287667 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.324675 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.893956 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerStarted","Data":"f82be34aae682e8c29705602a5f53ed7c575b686a407aea8e3a4986b123ff8de"} Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.894000 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerStarted","Data":"429dc8682418b5dd369adad65e15254f5660e5ac47e728d920acf519227996a5"} Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.894069 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-metadata" containerID="cri-o://f82be34aae682e8c29705602a5f53ed7c575b686a407aea8e3a4986b123ff8de" gracePeriod=30 Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.894045 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-log" containerID="cri-o://429dc8682418b5dd369adad65e15254f5660e5ac47e728d920acf519227996a5" 
gracePeriod=30 Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.897123 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerStarted","Data":"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279"} Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.897172 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerStarted","Data":"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82"} Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.900446 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b609af52-e8bb-4279-b472-39d6e572932e","Type":"ContainerStarted","Data":"997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82"} Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.900601 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b609af52-e8bb-4279-b472-39d6e572932e" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82" gracePeriod=30 Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.904156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55055431-47a0-4022-a32a-5b2b1ef303ac","Type":"ContainerStarted","Data":"332a221e0ff579954272eaf6146f26f2dac553c0d82d235c294584854974af51"} Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.917423 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.27445702 podStartE2EDuration="4.917406417s" podCreationTimestamp="2026-03-20 08:52:38 +0000 UTC" firstStartedPulling="2026-03-20 08:52:39.31612679 +0000 UTC m=+7391.575437941" lastFinishedPulling="2026-03-20 
08:52:41.959076187 +0000 UTC m=+7394.218387338" observedRunningTime="2026-03-20 08:52:42.910368939 +0000 UTC m=+7395.169680110" watchObservedRunningTime="2026-03-20 08:52:42.917406417 +0000 UTC m=+7395.176717568" Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.932727 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.035846693 podStartE2EDuration="5.932707481s" podCreationTimestamp="2026-03-20 08:52:37 +0000 UTC" firstStartedPulling="2026-03-20 08:52:39.051209108 +0000 UTC m=+7391.310520259" lastFinishedPulling="2026-03-20 08:52:41.948069906 +0000 UTC m=+7394.207381047" observedRunningTime="2026-03-20 08:52:42.928269484 +0000 UTC m=+7395.187580645" watchObservedRunningTime="2026-03-20 08:52:42.932707481 +0000 UTC m=+7395.192018642" Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.956435 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.199406427 podStartE2EDuration="5.956417125s" podCreationTimestamp="2026-03-20 08:52:37 +0000 UTC" firstStartedPulling="2026-03-20 08:52:39.191313716 +0000 UTC m=+7391.450624867" lastFinishedPulling="2026-03-20 08:52:41.948324414 +0000 UTC m=+7394.207635565" observedRunningTime="2026-03-20 08:52:42.945083214 +0000 UTC m=+7395.204394365" watchObservedRunningTime="2026-03-20 08:52:42.956417125 +0000 UTC m=+7395.215728276" Mar 20 08:52:42 crc kubenswrapper[5136]: I0320 08:52:42.965396 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.070733054 podStartE2EDuration="5.965375873s" podCreationTimestamp="2026-03-20 08:52:37 +0000 UTC" firstStartedPulling="2026-03-20 08:52:39.052600401 +0000 UTC m=+7391.311911552" lastFinishedPulling="2026-03-20 08:52:41.94724322 +0000 UTC m=+7394.206554371" observedRunningTime="2026-03-20 08:52:42.963339009 +0000 UTC m=+7395.222650160" 
watchObservedRunningTime="2026-03-20 08:52:42.965375873 +0000 UTC m=+7395.224687024" Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.189431 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.221561 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913681 5136 generic.go:334] "Generic (PLEG): container finished" podID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerID="f82be34aae682e8c29705602a5f53ed7c575b686a407aea8e3a4986b123ff8de" exitCode=0 Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913714 5136 generic.go:334] "Generic (PLEG): container finished" podID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerID="429dc8682418b5dd369adad65e15254f5660e5ac47e728d920acf519227996a5" exitCode=143 Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913781 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerDied","Data":"f82be34aae682e8c29705602a5f53ed7c575b686a407aea8e3a4986b123ff8de"} Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913878 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerDied","Data":"429dc8682418b5dd369adad65e15254f5660e5ac47e728d920acf519227996a5"} Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913889 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3ceb13df-eb0b-4512-aabe-6be6a1ee8631","Type":"ContainerDied","Data":"1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9"} Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.913925 5136 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="1559c8d52bc066bd66c6e99fda08a9501f98e03f4428aa6736d7dfcf34cf93c9"
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.915337 5136 generic.go:334] "Generic (PLEG): container finished" podID="c10383e2-004c-458c-922b-dd13574f12ff" containerID="036a56dc8be2b1448ecd4eaee7ae6cfc9fce54b35893d14784a5e2a194d245a2" exitCode=0
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.915366 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bknwr" event={"ID":"c10383e2-004c-458c-922b-dd13574f12ff","Type":"ContainerDied","Data":"036a56dc8be2b1448ecd4eaee7ae6cfc9fce54b35893d14784a5e2a194d245a2"}
Mar 20 08:52:43 crc kubenswrapper[5136]: I0320 08:52:43.947514 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.071134 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dls8d\" (UniqueName: \"kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d\") pod \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") "
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.071549 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data\") pod \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") "
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.071625 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs\") pod \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") "
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.071703 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle\") pod \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\" (UID: \"3ceb13df-eb0b-4512-aabe-6be6a1ee8631\") "
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.072322 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs" (OuterVolumeSpecName: "logs") pod "3ceb13df-eb0b-4512-aabe-6be6a1ee8631" (UID: "3ceb13df-eb0b-4512-aabe-6be6a1ee8631"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.092240 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d" (OuterVolumeSpecName: "kube-api-access-dls8d") pod "3ceb13df-eb0b-4512-aabe-6be6a1ee8631" (UID: "3ceb13df-eb0b-4512-aabe-6be6a1ee8631"). InnerVolumeSpecName "kube-api-access-dls8d". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.095852 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data" (OuterVolumeSpecName: "config-data") pod "3ceb13df-eb0b-4512-aabe-6be6a1ee8631" (UID: "3ceb13df-eb0b-4512-aabe-6be6a1ee8631"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.106558 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ceb13df-eb0b-4512-aabe-6be6a1ee8631" (UID: "3ceb13df-eb0b-4512-aabe-6be6a1ee8631"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.173548 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.173754 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dls8d\" (UniqueName: \"kubernetes.io/projected/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-kube-api-access-dls8d\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.173769 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.173779 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ceb13df-eb0b-4512-aabe-6be6a1ee8631-logs\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.927921 5136 generic.go:334] "Generic (PLEG): container finished" podID="0869b44d-0a1b-47ae-9836-8940a31bfcf3" containerID="ed9ea6d3f8369f00e748cdbd6f737c4f4b838eb8db7325e29aaf558dc66f2d6f" exitCode=0
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.928022 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.927975 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mdczc" event={"ID":"0869b44d-0a1b-47ae-9836-8940a31bfcf3","Type":"ContainerDied","Data":"ed9ea6d3f8369f00e748cdbd6f737c4f4b838eb8db7325e29aaf558dc66f2d6f"}
Mar 20 08:52:44 crc kubenswrapper[5136]: I0320 08:52:44.990278 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.001444 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.012115 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:45 crc kubenswrapper[5136]: E0320 08:52:45.012775 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-log"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.012806 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-log"
Mar 20 08:52:45 crc kubenswrapper[5136]: E0320 08:52:45.013044 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-metadata"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.013065 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-metadata"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.013330 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-metadata"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.013375 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" containerName="nova-metadata-log"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.015006 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.024806 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.025090 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.035943 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.193545 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.193947 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.194131 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.194177 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.194212 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.286285 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bknwr"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.295331 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.295631 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.296193 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.296344 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.296432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.296775 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.317735 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.318537 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.319752 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.329007 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.350074 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.402914 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nsjd8\" (UniqueName: \"kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8\") pod \"c10383e2-004c-458c-922b-dd13574f12ff\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") "
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.403069 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle\") pod \"c10383e2-004c-458c-922b-dd13574f12ff\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") "
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.403110 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data\") pod \"c10383e2-004c-458c-922b-dd13574f12ff\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") "
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.403185 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts\") pod \"c10383e2-004c-458c-922b-dd13574f12ff\" (UID: \"c10383e2-004c-458c-922b-dd13574f12ff\") "
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.407610 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts" (OuterVolumeSpecName: "scripts") pod "c10383e2-004c-458c-922b-dd13574f12ff" (UID: "c10383e2-004c-458c-922b-dd13574f12ff"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.408306 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8" (OuterVolumeSpecName: "kube-api-access-nsjd8") pod "c10383e2-004c-458c-922b-dd13574f12ff" (UID: "c10383e2-004c-458c-922b-dd13574f12ff"). InnerVolumeSpecName "kube-api-access-nsjd8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.428355 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c10383e2-004c-458c-922b-dd13574f12ff" (UID: "c10383e2-004c-458c-922b-dd13574f12ff"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.432080 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data" (OuterVolumeSpecName: "config-data") pod "c10383e2-004c-458c-922b-dd13574f12ff" (UID: "c10383e2-004c-458c-922b-dd13574f12ff"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.505292 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nsjd8\" (UniqueName: \"kubernetes.io/projected/c10383e2-004c-458c-922b-dd13574f12ff-kube-api-access-nsjd8\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.505639 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.505652 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.505663 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10383e2-004c-458c-922b-dd13574f12ff-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:45 crc kubenswrapper[5136]: W0320 08:52:45.777832 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6713f83b_29eb_4f81_a24c_fbc604bce554.slice/crio-7c5086e883b6c4d88baa490a424515f21d222c4b5586728c1daed8a11d7670ee WatchSource:0}: Error finding container 7c5086e883b6c4d88baa490a424515f21d222c4b5586728c1daed8a11d7670ee: Status 404 returned error can't find the container with id 7c5086e883b6c4d88baa490a424515f21d222c4b5586728c1daed8a11d7670ee
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.782359 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.951522 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerStarted","Data":"7c5086e883b6c4d88baa490a424515f21d222c4b5586728c1daed8a11d7670ee"}
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.959863 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-bknwr"
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.962673 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-bknwr" event={"ID":"c10383e2-004c-458c-922b-dd13574f12ff","Type":"ContainerDied","Data":"4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649"}
Mar 20 08:52:45 crc kubenswrapper[5136]: I0320 08:52:45.962725 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b7b62f31742d312679da735171fe1e4fe4477875a89de95b7eecb334658b649"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.030527 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 20 08:52:46 crc kubenswrapper[5136]: E0320 08:52:46.031027 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10383e2-004c-458c-922b-dd13574f12ff" containerName="nova-cell1-conductor-db-sync"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.031050 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10383e2-004c-458c-922b-dd13574f12ff" containerName="nova-cell1-conductor-db-sync"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.031268 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10383e2-004c-458c-922b-dd13574f12ff" containerName="nova-cell1-conductor-db-sync"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.032324 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.036459 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.053355 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.139479 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.139554 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92bl4\" (UniqueName: \"kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.139622 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.241210 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.241312 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.241374 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92bl4\" (UniqueName: \"kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.244898 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.251534 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.256207 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92bl4\" (UniqueName: \"kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4\") pod \"nova-cell1-conductor-0\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") " pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.344037 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.360032 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.409884 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ceb13df-eb0b-4512-aabe-6be6a1ee8631" path="/var/lib/kubelet/pods/3ceb13df-eb0b-4512-aabe-6be6a1ee8631/volumes"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.443285 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data\") pod \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") "
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.443649 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle\") pod \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") "
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.443708 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5phn\" (UniqueName: \"kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn\") pod \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") "
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.443797 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts\") pod \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\" (UID: \"0869b44d-0a1b-47ae-9836-8940a31bfcf3\") "
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.448322 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn" (OuterVolumeSpecName: "kube-api-access-f5phn") pod "0869b44d-0a1b-47ae-9836-8940a31bfcf3" (UID: "0869b44d-0a1b-47ae-9836-8940a31bfcf3"). InnerVolumeSpecName "kube-api-access-f5phn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.452025 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts" (OuterVolumeSpecName: "scripts") pod "0869b44d-0a1b-47ae-9836-8940a31bfcf3" (UID: "0869b44d-0a1b-47ae-9836-8940a31bfcf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.476387 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data" (OuterVolumeSpecName: "config-data") pod "0869b44d-0a1b-47ae-9836-8940a31bfcf3" (UID: "0869b44d-0a1b-47ae-9836-8940a31bfcf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.481968 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0869b44d-0a1b-47ae-9836-8940a31bfcf3" (UID: "0869b44d-0a1b-47ae-9836-8940a31bfcf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.546369 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.546406 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.546419 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0869b44d-0a1b-47ae-9836-8940a31bfcf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.546433 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5phn\" (UniqueName: \"kubernetes.io/projected/0869b44d-0a1b-47ae-9836-8940a31bfcf3-kube-api-access-f5phn\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.823491 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Mar 20 08:52:46 crc kubenswrapper[5136]: W0320 08:52:46.825654 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea7881c5_b719_41b0_8046_249f7fdb6f61.slice/crio-0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567 WatchSource:0}: Error finding container 0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567: Status 404 returned error can't find the container with id 0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.979065 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-mdczc"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.979564 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-mdczc" event={"ID":"0869b44d-0a1b-47ae-9836-8940a31bfcf3","Type":"ContainerDied","Data":"5c9a7d2b6bd777c84e35c9869b44075a2797d9ec171923d54abd4043df22cad2"}
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.980574 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c9a7d2b6bd777c84e35c9869b44075a2797d9ec171923d54abd4043df22cad2"
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.990965 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea7881c5-b719-41b0-8046-249f7fdb6f61","Type":"ContainerStarted","Data":"0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567"}
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.998546 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerStarted","Data":"d2b7b8267a2421e84ce96b443fe40959edd3068e2febff192b19ffb01873d453"}
Mar 20 08:52:46 crc kubenswrapper[5136]: I0320 08:52:46.998591 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerStarted","Data":"96278179e56c5d2348b8c914cce3f120d3445bc4fb5c442e6054e1df59c3b3ea"}
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.029839 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.029804968 podStartE2EDuration="3.029804968s" podCreationTimestamp="2026-03-20 08:52:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:47.018468297 +0000 UTC m=+7399.277779468" watchObservedRunningTime="2026-03-20 08:52:47.029804968 +0000 UTC m=+7399.289116119"
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.155783 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.156008 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-log" containerID="cri-o://42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" gracePeriod=30
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.156197 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-api" containerID="cri-o://1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279" gracePeriod=30
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.169618 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.169869 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="55055431-47a0-4022-a32a-5b2b1ef303ac" containerName="nova-scheduler-scheduler" containerID="cri-o://332a221e0ff579954272eaf6146f26f2dac553c0d82d235c294584854974af51" gracePeriod=30
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.181450 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.742808 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.776600 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle\") pod \"09516972-60d9-4cd7-96c6-adf48041a2bb\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") "
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.776715 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data\") pod \"09516972-60d9-4cd7-96c6-adf48041a2bb\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") "
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.776828 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w25hx\" (UniqueName: \"kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx\") pod \"09516972-60d9-4cd7-96c6-adf48041a2bb\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") "
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.776905 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs\") pod \"09516972-60d9-4cd7-96c6-adf48041a2bb\" (UID: \"09516972-60d9-4cd7-96c6-adf48041a2bb\") "
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.777437 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs" (OuterVolumeSpecName: "logs") pod "09516972-60d9-4cd7-96c6-adf48041a2bb" (UID: "09516972-60d9-4cd7-96c6-adf48041a2bb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.782759 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx" (OuterVolumeSpecName: "kube-api-access-w25hx") pod "09516972-60d9-4cd7-96c6-adf48041a2bb" (UID: "09516972-60d9-4cd7-96c6-adf48041a2bb"). InnerVolumeSpecName "kube-api-access-w25hx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.801366 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "09516972-60d9-4cd7-96c6-adf48041a2bb" (UID: "09516972-60d9-4cd7-96c6-adf48041a2bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.802103 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data" (OuterVolumeSpecName: "config-data") pod "09516972-60d9-4cd7-96c6-adf48041a2bb" (UID: "09516972-60d9-4cd7-96c6-adf48041a2bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.879206 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.879894 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w25hx\" (UniqueName: \"kubernetes.io/projected/09516972-60d9-4cd7-96c6-adf48041a2bb-kube-api-access-w25hx\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.879937 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/09516972-60d9-4cd7-96c6-adf48041a2bb-logs\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:47 crc kubenswrapper[5136]: I0320 08:52:47.879949 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/09516972-60d9-4cd7-96c6-adf48041a2bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.008174 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea7881c5-b719-41b0-8046-249f7fdb6f61","Type":"ContainerStarted","Data":"5df9d903ae57ec8baad2fe6c51be0e13f0c8a558bfc5471ea6ef07feb8e164f7"}
Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.008304 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0"
Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010301 5136 generic.go:334] "Generic (PLEG): container finished" podID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerID="1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279" exitCode=0
Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010349 5136 generic.go:334] "Generic (PLEG): container finished" podID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerID="42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" exitCode=143
Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010882 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010942 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerDied","Data":"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279"}
Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010972 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerDied","Data":"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82"}
Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.010987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"09516972-60d9-4cd7-96c6-adf48041a2bb","Type":"ContainerDied","Data":"c13ae57bbf638b75ff104c2d84bc60b50a61972caea54b71ddd8d960d7e36e25"}
Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.011011 5136 scope.go:117] "RemoveContainer" containerID="1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279"
Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.026173 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=3.026153225 podStartE2EDuration="3.026153225s" podCreationTimestamp="2026-03-20 08:52:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:48.022735049 +0000 UTC m=+7400.282046200" watchObservedRunningTime="2026-03-20 08:52:48.026153225 +0000 UTC m=+7400.285464376"
Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.040306
5136 scope.go:117] "RemoveContainer" containerID="42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.060802 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.064499 5136 scope.go:117] "RemoveContainer" containerID="1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279" Mar 20 08:52:48 crc kubenswrapper[5136]: E0320 08:52:48.065748 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279\": container with ID starting with 1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279 not found: ID does not exist" containerID="1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.065784 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279"} err="failed to get container status \"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279\": rpc error: code = NotFound desc = could not find container \"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279\": container with ID starting with 1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279 not found: ID does not exist" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.065825 5136 scope.go:117] "RemoveContainer" containerID="42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" Mar 20 08:52:48 crc kubenswrapper[5136]: E0320 08:52:48.066034 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82\": container with ID starting with 
42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82 not found: ID does not exist" containerID="42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066057 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82"} err="failed to get container status \"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82\": rpc error: code = NotFound desc = could not find container \"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82\": container with ID starting with 42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82 not found: ID does not exist" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066075 5136 scope.go:117] "RemoveContainer" containerID="1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066261 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279"} err="failed to get container status \"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279\": rpc error: code = NotFound desc = could not find container \"1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279\": container with ID starting with 1192414b1f58e343b914750f87d65b348a41f28cd23ec6e0c225ef92b8705279 not found: ID does not exist" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066285 5136 scope.go:117] "RemoveContainer" containerID="42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066469 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82"} err="failed to get container status 
\"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82\": rpc error: code = NotFound desc = could not find container \"42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82\": container with ID starting with 42b7517667261b98590f0704071509c654a61c43bb1117f0b400191c06192f82 not found: ID does not exist" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.066822 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.086138 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:48 crc kubenswrapper[5136]: E0320 08:52:48.087567 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-log" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.087735 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-log" Mar 20 08:52:48 crc kubenswrapper[5136]: E0320 08:52:48.087861 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-api" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.087992 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-api" Mar 20 08:52:48 crc kubenswrapper[5136]: E0320 08:52:48.088142 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0869b44d-0a1b-47ae-9836-8940a31bfcf3" containerName="nova-manage" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.088234 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0869b44d-0a1b-47ae-9836-8940a31bfcf3" containerName="nova-manage" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.088998 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-api" Mar 20 
08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.089112 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" containerName="nova-api-log" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.089214 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0869b44d-0a1b-47ae-9836-8940a31bfcf3" containerName="nova-manage" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.091274 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.104230 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.132625 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.187596 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rb5h\" (UniqueName: \"kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.187910 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.187933 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc 
kubenswrapper[5136]: I0320 08:52:48.187968 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.290284 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.290337 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.290397 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.290487 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rb5h\" (UniqueName: \"kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.291236 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs\") pod 
\"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.297239 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.298785 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.307281 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rb5h\" (UniqueName: \"kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h\") pod \"nova-api-0\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") " pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.412335 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09516972-60d9-4cd7-96c6-adf48041a2bb" path="/var/lib/kubelet/pods/09516972-60d9-4cd7-96c6-adf48041a2bb/volumes" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.438967 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.664931 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.735356 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"] Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.735646 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="dnsmasq-dns" containerID="cri-o://11cce7a508814881b536262c59bb79c78ef540e64c7ff86205cc1f7942262b6d" gracePeriod=10 Mar 20 08:52:48 crc kubenswrapper[5136]: I0320 08:52:48.883785 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:52:48 crc kubenswrapper[5136]: W0320 08:52:48.884367 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod579d7134_2752_49f9_b511_ec4c1c43e855.slice/crio-cf32985f1285ef1aad7b8fc54ab7da07f587890e27cb5380195236bfc72ef06e WatchSource:0}: Error finding container cf32985f1285ef1aad7b8fc54ab7da07f587890e27cb5380195236bfc72ef06e: Status 404 returned error can't find the container with id cf32985f1285ef1aad7b8fc54ab7da07f587890e27cb5380195236bfc72ef06e Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.025062 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerStarted","Data":"cf32985f1285ef1aad7b8fc54ab7da07f587890e27cb5380195236bfc72ef06e"} Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.030269 5136 generic.go:334] "Generic (PLEG): container finished" podID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerID="11cce7a508814881b536262c59bb79c78ef540e64c7ff86205cc1f7942262b6d" exitCode=0 Mar 
20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.030352 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" event={"ID":"2902cdfa-3695-49ec-a36d-73082b9aa5a5","Type":"ContainerDied","Data":"11cce7a508814881b536262c59bb79c78ef540e64c7ff86205cc1f7942262b6d"} Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.030483 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-log" containerID="cri-o://d2b7b8267a2421e84ce96b443fe40959edd3068e2febff192b19ffb01873d453" gracePeriod=30 Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.030585 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-metadata" containerID="cri-o://96278179e56c5d2348b8c914cce3f120d3445bc4fb5c442e6054e1df59c3b3ea" gracePeriod=30 Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.335776 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.528190 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb\") pod \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.528254 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc\") pod \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.528316 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmftd\" (UniqueName: \"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd\") pod \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.528367 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb\") pod \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.528890 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config\") pod \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\" (UID: \"2902cdfa-3695-49ec-a36d-73082b9aa5a5\") " Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.550114 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd" (OuterVolumeSpecName: "kube-api-access-mmftd") pod "2902cdfa-3695-49ec-a36d-73082b9aa5a5" (UID: "2902cdfa-3695-49ec-a36d-73082b9aa5a5"). InnerVolumeSpecName "kube-api-access-mmftd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.631707 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmftd\" (UniqueName: \"kubernetes.io/projected/2902cdfa-3695-49ec-a36d-73082b9aa5a5-kube-api-access-mmftd\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.682758 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2902cdfa-3695-49ec-a36d-73082b9aa5a5" (UID: "2902cdfa-3695-49ec-a36d-73082b9aa5a5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.687069 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2902cdfa-3695-49ec-a36d-73082b9aa5a5" (UID: "2902cdfa-3695-49ec-a36d-73082b9aa5a5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.704630 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2902cdfa-3695-49ec-a36d-73082b9aa5a5" (UID: "2902cdfa-3695-49ec-a36d-73082b9aa5a5"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.709190 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config" (OuterVolumeSpecName: "config") pod "2902cdfa-3695-49ec-a36d-73082b9aa5a5" (UID: "2902cdfa-3695-49ec-a36d-73082b9aa5a5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.733016 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.733045 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.733055 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:49 crc kubenswrapper[5136]: I0320 08:52:49.733065 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2902cdfa-3695-49ec-a36d-73082b9aa5a5-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.040628 5136 generic.go:334] "Generic (PLEG): container finished" podID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerID="96278179e56c5d2348b8c914cce3f120d3445bc4fb5c442e6054e1df59c3b3ea" exitCode=0 Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.040662 5136 generic.go:334] "Generic (PLEG): container finished" podID="6713f83b-29eb-4f81-a24c-fbc604bce554" 
containerID="d2b7b8267a2421e84ce96b443fe40959edd3068e2febff192b19ffb01873d453" exitCode=143 Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.040672 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerDied","Data":"96278179e56c5d2348b8c914cce3f120d3445bc4fb5c442e6054e1df59c3b3ea"} Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.040703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerDied","Data":"d2b7b8267a2421e84ce96b443fe40959edd3068e2febff192b19ffb01873d453"} Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.042292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerStarted","Data":"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed"} Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.042324 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerStarted","Data":"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0"} Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.043599 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" event={"ID":"2902cdfa-3695-49ec-a36d-73082b9aa5a5","Type":"ContainerDied","Data":"3724ba25b3c3d5b60071b8d78e6fb6e8e43e3c7f75f11f016def345af42800c4"} Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.043634 5136 scope.go:117] "RemoveContainer" containerID="11cce7a508814881b536262c59bb79c78ef540e64c7ff86205cc1f7942262b6d" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.043749 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.059170 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.065405 5136 scope.go:117] "RemoveContainer" containerID="174d06d5a4cd8a3ee5fe8c3756254a01a6a8554baf9bae2be57775301d65bd05" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.073301 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.073283275 podStartE2EDuration="2.073283275s" podCreationTimestamp="2026-03-20 08:52:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:50.068237388 +0000 UTC m=+7402.327548549" watchObservedRunningTime="2026-03-20 08:52:50.073283275 +0000 UTC m=+7402.332594426" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.123710 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"] Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.131885 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cdd4cf5b7-8vjw2"] Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150491 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data\") pod \"6713f83b-29eb-4f81-a24c-fbc604bce554\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150561 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs\") pod \"6713f83b-29eb-4f81-a24c-fbc604bce554\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " Mar 20 
08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150644 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w\") pod \"6713f83b-29eb-4f81-a24c-fbc604bce554\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150668 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle\") pod \"6713f83b-29eb-4f81-a24c-fbc604bce554\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150729 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs\") pod \"6713f83b-29eb-4f81-a24c-fbc604bce554\" (UID: \"6713f83b-29eb-4f81-a24c-fbc604bce554\") " Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.150791 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs" (OuterVolumeSpecName: "logs") pod "6713f83b-29eb-4f81-a24c-fbc604bce554" (UID: "6713f83b-29eb-4f81-a24c-fbc604bce554"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.151165 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6713f83b-29eb-4f81-a24c-fbc604bce554-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.156376 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w" (OuterVolumeSpecName: "kube-api-access-swm2w") pod "6713f83b-29eb-4f81-a24c-fbc604bce554" (UID: "6713f83b-29eb-4f81-a24c-fbc604bce554"). InnerVolumeSpecName "kube-api-access-swm2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.174583 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data" (OuterVolumeSpecName: "config-data") pod "6713f83b-29eb-4f81-a24c-fbc604bce554" (UID: "6713f83b-29eb-4f81-a24c-fbc604bce554"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.183324 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6713f83b-29eb-4f81-a24c-fbc604bce554" (UID: "6713f83b-29eb-4f81-a24c-fbc604bce554"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.217024 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "6713f83b-29eb-4f81-a24c-fbc604bce554" (UID: "6713f83b-29eb-4f81-a24c-fbc604bce554"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.251919 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.251955 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/6713f83b-29eb-4f81-a24c-fbc604bce554-kube-api-access-swm2w\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.251966 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.251975 5136 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/6713f83b-29eb-4f81-a24c-fbc604bce554-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:52:50 crc kubenswrapper[5136]: I0320 08:52:50.407126 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" path="/var/lib/kubelet/pods/2902cdfa-3695-49ec-a36d-73082b9aa5a5/volumes" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.053369 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-metadata-0" event={"ID":"6713f83b-29eb-4f81-a24c-fbc604bce554","Type":"ContainerDied","Data":"7c5086e883b6c4d88baa490a424515f21d222c4b5586728c1daed8a11d7670ee"} Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.053396 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.053436 5136 scope.go:117] "RemoveContainer" containerID="96278179e56c5d2348b8c914cce3f120d3445bc4fb5c442e6054e1df59c3b3ea" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.153603 5136 scope.go:117] "RemoveContainer" containerID="d2b7b8267a2421e84ce96b443fe40959edd3068e2febff192b19ffb01873d453" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.157587 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.177300 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.188432 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:51 crc kubenswrapper[5136]: E0320 08:52:51.188804 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="dnsmasq-dns" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.188832 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="dnsmasq-dns" Mar 20 08:52:51 crc kubenswrapper[5136]: E0320 08:52:51.188846 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-log" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.188852 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-log" Mar 20 08:52:51 crc kubenswrapper[5136]: 
E0320 08:52:51.188866 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-metadata" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.188872 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-metadata" Mar 20 08:52:51 crc kubenswrapper[5136]: E0320 08:52:51.188886 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="init" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.188892 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="init" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.189046 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2902cdfa-3695-49ec-a36d-73082b9aa5a5" containerName="dnsmasq-dns" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.189055 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-log" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.189077 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" containerName="nova-metadata-metadata" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.189962 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.203320 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.210684 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.223483 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.371653 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.372448 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.372501 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.372552 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.372610 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls6gj\" (UniqueName: \"kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.387772 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.475669 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.475840 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.475912 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.475951 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls6gj\" (UniqueName: \"kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj\") pod 
\"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.476029 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.477502 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.480462 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.481411 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.491730 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.494484 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-ls6gj\" (UniqueName: \"kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj\") pod \"nova-metadata-0\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.512741 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:52:51 crc kubenswrapper[5136]: I0320 08:52:51.946791 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:52:52 crc kubenswrapper[5136]: I0320 08:52:52.064613 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerStarted","Data":"7d0527f6839d134920c7511c9424a689979b1e2fa3991597ecf809ce71d7c929"} Mar 20 08:52:52 crc kubenswrapper[5136]: I0320 08:52:52.407474 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6713f83b-29eb-4f81-a24c-fbc604bce554" path="/var/lib/kubelet/pods/6713f83b-29eb-4f81-a24c-fbc604bce554/volumes" Mar 20 08:52:53 crc kubenswrapper[5136]: I0320 08:52:53.077765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerStarted","Data":"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f"} Mar 20 08:52:53 crc kubenswrapper[5136]: I0320 08:52:53.078284 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerStarted","Data":"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229"} Mar 20 08:52:53 crc kubenswrapper[5136]: I0320 08:52:53.102282 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.102266773 podStartE2EDuration="2.102266773s" 
podCreationTimestamp="2026-03-20 08:52:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:52:53.101743717 +0000 UTC m=+7405.361054878" watchObservedRunningTime="2026-03-20 08:52:53.102266773 +0000 UTC m=+7405.361577924" Mar 20 08:52:58 crc kubenswrapper[5136]: I0320 08:52:58.440386 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 08:52:58 crc kubenswrapper[5136]: I0320 08:52:58.441026 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 08:52:59 crc kubenswrapper[5136]: I0320 08:52:59.523055 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:52:59 crc kubenswrapper[5136]: I0320 08:52:59.523059 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:01 crc kubenswrapper[5136]: I0320 08:53:01.513465 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 08:53:01 crc kubenswrapper[5136]: I0320 08:53:01.513840 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 08:53:02 crc kubenswrapper[5136]: I0320 08:53:02.528028 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" probeResult="failure" output="Get 
\"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:02 crc kubenswrapper[5136]: I0320 08:53:02.528076 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:06 crc kubenswrapper[5136]: I0320 08:53:06.439656 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:53:06 crc kubenswrapper[5136]: I0320 08:53:06.440120 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:53:09 crc kubenswrapper[5136]: I0320 08:53:09.513753 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 08:53:09 crc kubenswrapper[5136]: I0320 08:53:09.514039 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 08:53:09 crc kubenswrapper[5136]: I0320 08:53:09.522990 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:09 crc kubenswrapper[5136]: I0320 08:53:09.523038 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:12 crc kubenswrapper[5136]: I0320 08:53:12.522939 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:12 crc kubenswrapper[5136]: I0320 08:53:12.522939 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:13 crc kubenswrapper[5136]: E0320 08:53:13.031973 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb609af52_e8bb_4279_b472_39d6e572932e.slice/crio-conmon-997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb609af52_e8bb_4279_b472_39d6e572932e.slice/crio-997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82.scope\": RecentStats: unable to find data in memory cache]" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.271273 5136 generic.go:334] "Generic (PLEG): container finished" podID="b609af52-e8bb-4279-b472-39d6e572932e" containerID="997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82" exitCode=137 Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.271476 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b609af52-e8bb-4279-b472-39d6e572932e","Type":"ContainerDied","Data":"997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82"} Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.271627 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"b609af52-e8bb-4279-b472-39d6e572932e","Type":"ContainerDied","Data":"53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6"} Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.271642 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53b67fe2d8796bd5fc35565ee671e2c0b2d50b59df54ce42cd5329a782fb1ab6" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.309189 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.433544 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle\") pod \"b609af52-e8bb-4279-b472-39d6e572932e\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.433619 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bw4wt\" (UniqueName: \"kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt\") pod \"b609af52-e8bb-4279-b472-39d6e572932e\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.433787 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data\") pod \"b609af52-e8bb-4279-b472-39d6e572932e\" (UID: \"b609af52-e8bb-4279-b472-39d6e572932e\") " Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.441023 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt" (OuterVolumeSpecName: "kube-api-access-bw4wt") pod "b609af52-e8bb-4279-b472-39d6e572932e" (UID: "b609af52-e8bb-4279-b472-39d6e572932e"). 
InnerVolumeSpecName "kube-api-access-bw4wt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.459621 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b609af52-e8bb-4279-b472-39d6e572932e" (UID: "b609af52-e8bb-4279-b472-39d6e572932e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.464362 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data" (OuterVolumeSpecName: "config-data") pod "b609af52-e8bb-4279-b472-39d6e572932e" (UID: "b609af52-e8bb-4279-b472-39d6e572932e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.539275 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.539312 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b609af52-e8bb-4279-b472-39d6e572932e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:13 crc kubenswrapper[5136]: I0320 08:53:13.539324 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bw4wt\" (UniqueName: \"kubernetes.io/projected/b609af52-e8bb-4279-b472-39d6e572932e-kube-api-access-bw4wt\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.279477 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.322457 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.331161 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.356752 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:53:14 crc kubenswrapper[5136]: E0320 08:53:14.357214 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b609af52-e8bb-4279-b472-39d6e572932e" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.357239 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b609af52-e8bb-4279-b472-39d6e572932e" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.358804 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b609af52-e8bb-4279-b472-39d6e572932e" containerName="nova-cell1-novncproxy-novncproxy" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.359591 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.364996 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.367594 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.368291 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.381673 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.412766 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b609af52-e8bb-4279-b472-39d6e572932e" path="/var/lib/kubelet/pods/b609af52-e8bb-4279-b472-39d6e572932e/volumes" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.455031 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6j8r\" (UniqueName: \"kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.455083 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.455791 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.455970 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.456082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.557969 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.558025 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.558057 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.558148 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6j8r\" (UniqueName: \"kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.558183 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.563242 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.564323 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.565181 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.565963 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.577762 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6j8r\" (UniqueName: \"kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r\") pod \"nova-cell1-novncproxy-0\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") " pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:14 crc kubenswrapper[5136]: I0320 08:53:14.686018 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Mar 20 08:53:15 crc kubenswrapper[5136]: I0320 08:53:15.156351 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 08:53:15 crc kubenswrapper[5136]: I0320 08:53:15.287300 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65b4b8da-0eda-4a77-aeed-0a6f9350a942","Type":"ContainerStarted","Data":"80db3f58b1ebfbb9a5e2a7946e85f8a9a484a8c2e28fb3c3b16dbcc6876113ea"} Mar 20 08:53:16 crc kubenswrapper[5136]: I0320 08:53:16.296035 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65b4b8da-0eda-4a77-aeed-0a6f9350a942","Type":"ContainerStarted","Data":"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961"} Mar 20 08:53:16 crc kubenswrapper[5136]: I0320 08:53:16.316873 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.316848834 podStartE2EDuration="2.316848834s" podCreationTimestamp="2026-03-20 08:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:53:16.310703114 +0000 UTC m=+7428.570014255" watchObservedRunningTime="2026-03-20 08:53:16.316848834 +0000 UTC m=+7428.576159985"
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.310743 5136 generic.go:334] "Generic (PLEG): container finished" podID="55055431-47a0-4022-a32a-5b2b1ef303ac" containerID="332a221e0ff579954272eaf6146f26f2dac553c0d82d235c294584854974af51" exitCode=137
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.310857 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55055431-47a0-4022-a32a-5b2b1ef303ac","Type":"ContainerDied","Data":"332a221e0ff579954272eaf6146f26f2dac553c0d82d235c294584854974af51"}
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.601774 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.707189 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9f4s\" (UniqueName: \"kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s\") pod \"55055431-47a0-4022-a32a-5b2b1ef303ac\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") "
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.707467 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle\") pod \"55055431-47a0-4022-a32a-5b2b1ef303ac\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") "
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.707578 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data\") pod \"55055431-47a0-4022-a32a-5b2b1ef303ac\" (UID: \"55055431-47a0-4022-a32a-5b2b1ef303ac\") "
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.712031 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s" (OuterVolumeSpecName: "kube-api-access-t9f4s") pod "55055431-47a0-4022-a32a-5b2b1ef303ac" (UID: "55055431-47a0-4022-a32a-5b2b1ef303ac"). InnerVolumeSpecName "kube-api-access-t9f4s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.751233 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55055431-47a0-4022-a32a-5b2b1ef303ac" (UID: "55055431-47a0-4022-a32a-5b2b1ef303ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.759644 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data" (OuterVolumeSpecName: "config-data") pod "55055431-47a0-4022-a32a-5b2b1ef303ac" (UID: "55055431-47a0-4022-a32a-5b2b1ef303ac"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.809886 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.809938 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55055431-47a0-4022-a32a-5b2b1ef303ac-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:53:17 crc kubenswrapper[5136]: I0320 08:53:17.809956 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9f4s\" (UniqueName: \"kubernetes.io/projected/55055431-47a0-4022-a32a-5b2b1ef303ac-kube-api-access-t9f4s\") on node \"crc\" DevicePath \"\""
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.329878 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"55055431-47a0-4022-a32a-5b2b1ef303ac","Type":"ContainerDied","Data":"62d977f140d52d185ad8e335d0e34478d3fe4528e116c9595298eb659df62cab"}
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.330129 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.330440 5136 scope.go:117] "RemoveContainer" containerID="332a221e0ff579954272eaf6146f26f2dac553c0d82d235c294584854974af51"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.368896 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.391205 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.416503 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55055431-47a0-4022-a32a-5b2b1ef303ac" path="/var/lib/kubelet/pods/55055431-47a0-4022-a32a-5b2b1ef303ac/volumes"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.417265 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 08:53:18 crc kubenswrapper[5136]: E0320 08:53:18.417618 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55055431-47a0-4022-a32a-5b2b1ef303ac" containerName="nova-scheduler-scheduler"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.417637 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="55055431-47a0-4022-a32a-5b2b1ef303ac" containerName="nova-scheduler-scheduler"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.417861 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="55055431-47a0-4022-a32a-5b2b1ef303ac" containerName="nova-scheduler-scheduler"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.418547 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.418640 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.423126 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.429629 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.429725 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.531302 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.532078 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.532271 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjhz\" (UniqueName: \"kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.535191 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.543353 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.635140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjjhz\" (UniqueName: \"kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.651250 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjjhz\" (UniqueName: \"kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz\") pod \"nova-scheduler-0\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " pod="openstack/nova-scheduler-0"
Mar 20 08:53:18 crc kubenswrapper[5136]: I0320 08:53:18.744476 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 08:53:19 crc kubenswrapper[5136]: I0320 08:53:19.164241 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 08:53:19 crc kubenswrapper[5136]: W0320 08:53:19.167412 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8fbe2855_6fbb_40f0_bea7_43b853e673ba.slice/crio-06eb0e932ffea6d92e18230014df407dbddf53d2394ca0952871e976aa85a7c5 WatchSource:0}: Error finding container 06eb0e932ffea6d92e18230014df407dbddf53d2394ca0952871e976aa85a7c5: Status 404 returned error can't find the container with id 06eb0e932ffea6d92e18230014df407dbddf53d2394ca0952871e976aa85a7c5
Mar 20 08:53:19 crc kubenswrapper[5136]: I0320 08:53:19.338032 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fbe2855-6fbb-40f0-bea7-43b853e673ba","Type":"ContainerStarted","Data":"06eb0e932ffea6d92e18230014df407dbddf53d2394ca0952871e976aa85a7c5"}
Mar 20 08:53:19 crc kubenswrapper[5136]: I0320 08:53:19.485030 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:53:19 crc kubenswrapper[5136]: I0320 08:53:19.485378 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:53:19 crc kubenswrapper[5136]: I0320 08:53:19.686185 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:53:20 crc kubenswrapper[5136]: I0320 08:53:20.350255 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fbe2855-6fbb-40f0-bea7-43b853e673ba","Type":"ContainerStarted","Data":"49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4"}
Mar 20 08:53:20 crc kubenswrapper[5136]: I0320 08:53:20.372718 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.372695622 podStartE2EDuration="2.372695622s" podCreationTimestamp="2026-03-20 08:53:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:53:20.363115725 +0000 UTC m=+7432.622426876" watchObservedRunningTime="2026-03-20 08:53:20.372695622 +0000 UTC m=+7432.632006783"
Mar 20 08:53:22 crc kubenswrapper[5136]: I0320 08:53:22.522960 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:53:22 crc kubenswrapper[5136]: I0320 08:53:22.522990 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:53:23 crc kubenswrapper[5136]: I0320 08:53:23.745491 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Mar 20 08:53:24 crc kubenswrapper[5136]: I0320 08:53:24.687317 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:53:24 crc kubenswrapper[5136]: I0320 08:53:24.705683 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.416284 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.563966 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-dqx9d"]
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.565069 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.567277 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.567346 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.577788 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqx9d"]
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.582295 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.582344 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.582699 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ngb7\" (UniqueName: \"kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.583062 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.684652 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ngb7\" (UniqueName: \"kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.685158 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.685243 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.685270 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.691388 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.704801 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.704932 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ngb7\" (UniqueName: \"kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.706353 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts\") pod \"nova-cell1-cell-mapping-dqx9d\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") " pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:25 crc kubenswrapper[5136]: I0320 08:53:25.883394 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:26 crc kubenswrapper[5136]: I0320 08:53:26.361334 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqx9d"]
Mar 20 08:53:26 crc kubenswrapper[5136]: I0320 08:53:26.410210 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqx9d" event={"ID":"db04162b-4913-4acc-b387-d7324202a05b","Type":"ContainerStarted","Data":"d2ae94894fbea7420f6354e5d1a88eec38b5ad81604fa2329e2b4a94c19c8eee"}
Mar 20 08:53:27 crc kubenswrapper[5136]: I0320 08:53:27.414570 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqx9d" event={"ID":"db04162b-4913-4acc-b387-d7324202a05b","Type":"ContainerStarted","Data":"be01a18339108a324f38d8991f30133c5afcdbbf8536a6fc62d20def93a4fe70"}
Mar 20 08:53:27 crc kubenswrapper[5136]: I0320 08:53:27.434413 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-dqx9d" podStartSLOduration=2.434392233 podStartE2EDuration="2.434392233s" podCreationTimestamp="2026-03-20 08:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:53:27.428611194 +0000 UTC m=+7439.687922335" watchObservedRunningTime="2026-03-20 08:53:27.434392233 +0000 UTC m=+7439.693703384"
Mar 20 08:53:28 crc kubenswrapper[5136]: I0320 08:53:28.745095 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Mar 20 08:53:28 crc kubenswrapper[5136]: I0320 08:53:28.775280 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Mar 20 08:53:29 crc kubenswrapper[5136]: I0320 08:53:29.457470 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Mar 20 08:53:29 crc kubenswrapper[5136]: I0320 08:53:29.522841 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:53:29 crc kubenswrapper[5136]: I0320 08:53:29.522900 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.138:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:53:31 crc kubenswrapper[5136]: I0320 08:53:31.450271 5136 generic.go:334] "Generic (PLEG): container finished" podID="db04162b-4913-4acc-b387-d7324202a05b" containerID="be01a18339108a324f38d8991f30133c5afcdbbf8536a6fc62d20def93a4fe70" exitCode=0
Mar 20 08:53:31 crc kubenswrapper[5136]: I0320 08:53:31.450364 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqx9d" event={"ID":"db04162b-4913-4acc-b387-d7324202a05b","Type":"ContainerDied","Data":"be01a18339108a324f38d8991f30133c5afcdbbf8536a6fc62d20def93a4fe70"}
Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.522000 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.522764 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.139:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.783038 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.916508 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ngb7\" (UniqueName: \"kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7\") pod \"db04162b-4913-4acc-b387-d7324202a05b\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") "
Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.916591 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data\") pod \"db04162b-4913-4acc-b387-d7324202a05b\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") "
Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.916649 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts\") pod \"db04162b-4913-4acc-b387-d7324202a05b\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") "
Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.916719 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle\") pod \"db04162b-4913-4acc-b387-d7324202a05b\" (UID: \"db04162b-4913-4acc-b387-d7324202a05b\") "
Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.921639 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts" (OuterVolumeSpecName: "scripts") pod "db04162b-4913-4acc-b387-d7324202a05b" (UID: "db04162b-4913-4acc-b387-d7324202a05b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.921746 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7" (OuterVolumeSpecName: "kube-api-access-6ngb7") pod "db04162b-4913-4acc-b387-d7324202a05b" (UID: "db04162b-4913-4acc-b387-d7324202a05b"). InnerVolumeSpecName "kube-api-access-6ngb7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.941937 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "db04162b-4913-4acc-b387-d7324202a05b" (UID: "db04162b-4913-4acc-b387-d7324202a05b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:53:32 crc kubenswrapper[5136]: I0320 08:53:32.945746 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data" (OuterVolumeSpecName: "config-data") pod "db04162b-4913-4acc-b387-d7324202a05b" (UID: "db04162b-4913-4acc-b387-d7324202a05b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.018462 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ngb7\" (UniqueName: \"kubernetes.io/projected/db04162b-4913-4acc-b387-d7324202a05b-kube-api-access-6ngb7\") on node \"crc\" DevicePath \"\""
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.018496 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.018507 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.018516 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/db04162b-4913-4acc-b387-d7324202a05b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.468797 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-dqx9d" event={"ID":"db04162b-4913-4acc-b387-d7324202a05b","Type":"ContainerDied","Data":"d2ae94894fbea7420f6354e5d1a88eec38b5ad81604fa2329e2b4a94c19c8eee"}
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.469129 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2ae94894fbea7420f6354e5d1a88eec38b5ad81604fa2329e2b4a94c19c8eee"
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.468889 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-dqx9d"
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.650754 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.651646 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" containerID="cri-o://96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0" gracePeriod=30
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.652176 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" containerID="cri-o://32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed" gracePeriod=30
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.666616 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.666882 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" containerID="cri-o://49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" gracePeriod=30
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.734685 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.734979 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" containerID="cri-o://f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229" gracePeriod=30
Mar 20 08:53:33 crc kubenswrapper[5136]: I0320 08:53:33.735118 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" containerID="cri-o://7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f" gracePeriod=30
Mar 20 08:53:33 crc kubenswrapper[5136]: E0320 08:53:33.747112 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 08:53:33 crc kubenswrapper[5136]: E0320 08:53:33.748607 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 08:53:33 crc kubenswrapper[5136]: E0320 08:53:33.749950 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 08:53:33 crc kubenswrapper[5136]: E0320 08:53:33.750010 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler"
Mar 20 08:53:34 crc kubenswrapper[5136]: I0320 08:53:34.477333 5136 generic.go:334] "Generic (PLEG): container finished" podID="579d7134-2752-49f9-b511-ec4c1c43e855" containerID="96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0" exitCode=143
Mar 20 08:53:34 crc kubenswrapper[5136]: I0320 08:53:34.477402 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerDied","Data":"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0"}
Mar 20 08:53:34 crc kubenswrapper[5136]: I0320 08:53:34.479245 5136 generic.go:334] "Generic (PLEG): container finished" podID="b168d83c-bd4d-4187-915f-59b00d213a23" containerID="f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229" exitCode=143
Mar 20 08:53:34 crc kubenswrapper[5136]: I0320 08:53:34.479270 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerDied","Data":"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229"}
Mar 20 08:53:38 crc kubenswrapper[5136]: E0320 08:53:38.747240 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 08:53:38 crc kubenswrapper[5136]: E0320 08:53:38.749230 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 08:53:38 crc kubenswrapper[5136]: E0320 08:53:38.750380 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 08:53:38 crc kubenswrapper[5136]: E0320 08:53:38.750408 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler"
Mar 20 08:53:43 crc kubenswrapper[5136]: E0320 08:53:43.755978 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 08:53:43 crc kubenswrapper[5136]: E0320 08:53:43.757874 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 08:53:43 crc kubenswrapper[5136]: E0320 08:53:43.758992 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Mar 20 08:53:43 crc kubenswrapper[5136]: E0320 08:53:43.759035 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler"
Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.414104 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.502669 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs\") pod \"579d7134-2752-49f9-b511-ec4c1c43e855\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") "
Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.502722 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data\") pod \"579d7134-2752-49f9-b511-ec4c1c43e855\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") "
Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.502772 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rb5h\" (UniqueName: \"kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h\") pod \"579d7134-2752-49f9-b511-ec4c1c43e855\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") "
Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.502809 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle\") pod \"579d7134-2752-49f9-b511-ec4c1c43e855\" (UID: \"579d7134-2752-49f9-b511-ec4c1c43e855\") "
Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.503459 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs" (OuterVolumeSpecName: "logs") pod "579d7134-2752-49f9-b511-ec4c1c43e855" (UID: 
"579d7134-2752-49f9-b511-ec4c1c43e855"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.508333 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h" (OuterVolumeSpecName: "kube-api-access-8rb5h") pod "579d7134-2752-49f9-b511-ec4c1c43e855" (UID: "579d7134-2752-49f9-b511-ec4c1c43e855"). InnerVolumeSpecName "kube-api-access-8rb5h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.526853 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "579d7134-2752-49f9-b511-ec4c1c43e855" (UID: "579d7134-2752-49f9-b511-ec4c1c43e855"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.533046 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.544185 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data" (OuterVolumeSpecName: "config-data") pod "579d7134-2752-49f9-b511-ec4c1c43e855" (UID: "579d7134-2752-49f9-b511-ec4c1c43e855"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.595330 5136 generic.go:334] "Generic (PLEG): container finished" podID="b168d83c-bd4d-4187-915f-59b00d213a23" containerID="7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f" exitCode=0 Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.595404 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.595422 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerDied","Data":"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f"} Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.595473 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b168d83c-bd4d-4187-915f-59b00d213a23","Type":"ContainerDied","Data":"7d0527f6839d134920c7511c9424a689979b1e2fa3991597ecf809ce71d7c929"} Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.595515 5136 scope.go:117] "RemoveContainer" containerID="7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.598317 5136 generic.go:334] "Generic (PLEG): container finished" podID="579d7134-2752-49f9-b511-ec4c1c43e855" containerID="32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed" exitCode=0 Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.598360 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerDied","Data":"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed"} Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.598392 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.598399 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"579d7134-2752-49f9-b511-ec4c1c43e855","Type":"ContainerDied","Data":"cf32985f1285ef1aad7b8fc54ab7da07f587890e27cb5380195236bfc72ef06e"} Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.604557 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.604585 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/579d7134-2752-49f9-b511-ec4c1c43e855-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.604597 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/579d7134-2752-49f9-b511-ec4c1c43e855-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.604607 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rb5h\" (UniqueName: \"kubernetes.io/projected/579d7134-2752-49f9-b511-ec4c1c43e855-kube-api-access-8rb5h\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.617098 5136 scope.go:117] "RemoveContainer" containerID="f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.635046 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.659474 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.660596 5136 scope.go:117] "RemoveContainer" 
containerID="7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.662775 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f\": container with ID starting with 7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f not found: ID does not exist" containerID="7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.662826 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f"} err="failed to get container status \"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f\": rpc error: code = NotFound desc = could not find container \"7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f\": container with ID starting with 7d3830628da3a3cae106b1557fd6bb1ab85eca213d98f75642a61a34ff4dc19f not found: ID does not exist" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.662849 5136 scope.go:117] "RemoveContainer" containerID="f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.663185 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229\": container with ID starting with f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229 not found: ID does not exist" containerID="f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.663216 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229"} err="failed to get container status \"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229\": rpc error: code = NotFound desc = could not find container \"f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229\": container with ID starting with f98edba9123fda937d7ad4a60357772d3d424d59ab9f581f62837bcf3c5fd229 not found: ID does not exist" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.663234 5136 scope.go:117] "RemoveContainer" containerID="32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.669833 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.670289 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db04162b-4913-4acc-b387-d7324202a05b" containerName="nova-manage" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670308 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="db04162b-4913-4acc-b387-d7324202a05b" containerName="nova-manage" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.670322 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670329 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.670344 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670349 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" Mar 20 08:53:47 crc kubenswrapper[5136]: 
E0320 08:53:47.670360 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670366 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.670392 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670398 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670556 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-metadata" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670571 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-log" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670581 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="db04162b-4913-4acc-b387-d7324202a05b" containerName="nova-manage" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670593 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" containerName="nova-metadata-log" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.670602 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" containerName="nova-api-api" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.671566 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.678263 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.682539 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.694025 5136 scope.go:117] "RemoveContainer" containerID="96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.705230 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data\") pod \"b168d83c-bd4d-4187-915f-59b00d213a23\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.705274 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls6gj\" (UniqueName: \"kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj\") pod \"b168d83c-bd4d-4187-915f-59b00d213a23\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.705316 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs\") pod \"b168d83c-bd4d-4187-915f-59b00d213a23\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.705394 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle\") pod \"b168d83c-bd4d-4187-915f-59b00d213a23\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " Mar 20 08:53:47 
crc kubenswrapper[5136]: I0320 08:53:47.705432 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs\") pod \"b168d83c-bd4d-4187-915f-59b00d213a23\" (UID: \"b168d83c-bd4d-4187-915f-59b00d213a23\") " Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.706570 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs" (OuterVolumeSpecName: "logs") pod "b168d83c-bd4d-4187-915f-59b00d213a23" (UID: "b168d83c-bd4d-4187-915f-59b00d213a23"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.707460 5136 scope.go:117] "RemoveContainer" containerID="32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.707968 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed\": container with ID starting with 32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed not found: ID does not exist" containerID="32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.708004 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed"} err="failed to get container status \"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed\": rpc error: code = NotFound desc = could not find container \"32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed\": container with ID starting with 32962fd8461e45f8def763e9f82cc8aede01d4f696222a24a4de4aaaec3639ed not found: ID does not exist" Mar 20 08:53:47 crc 
kubenswrapper[5136]: I0320 08:53:47.708028 5136 scope.go:117] "RemoveContainer" containerID="96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0" Mar 20 08:53:47 crc kubenswrapper[5136]: E0320 08:53:47.708455 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0\": container with ID starting with 96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0 not found: ID does not exist" containerID="96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.708475 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0"} err="failed to get container status \"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0\": rpc error: code = NotFound desc = could not find container \"96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0\": container with ID starting with 96734219fb7514c4b99e79b7d0fa025cf134e8f0dbe5153409e89a642d0232e0 not found: ID does not exist" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.709172 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj" (OuterVolumeSpecName: "kube-api-access-ls6gj") pod "b168d83c-bd4d-4187-915f-59b00d213a23" (UID: "b168d83c-bd4d-4187-915f-59b00d213a23"). InnerVolumeSpecName "kube-api-access-ls6gj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.726653 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b168d83c-bd4d-4187-915f-59b00d213a23" (UID: "b168d83c-bd4d-4187-915f-59b00d213a23"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.730187 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data" (OuterVolumeSpecName: "config-data") pod "b168d83c-bd4d-4187-915f-59b00d213a23" (UID: "b168d83c-bd4d-4187-915f-59b00d213a23"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807397 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807519 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807566 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " 
pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807592 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x56v2\" (UniqueName: \"kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807946 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807985 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls6gj\" (UniqueName: \"kubernetes.io/projected/b168d83c-bd4d-4187-915f-59b00d213a23-kube-api-access-ls6gj\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.807995 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.808005 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b168d83c-bd4d-4187-915f-59b00d213a23-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.909597 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.909761 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.909842 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.909878 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x56v2\" (UniqueName: \"kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.910311 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.912798 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:47 crc kubenswrapper[5136]: I0320 08:53:47.913009 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.021150 5136 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b168d83c-bd4d-4187-915f-59b00d213a23" (UID: "b168d83c-bd4d-4187-915f-59b00d213a23"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.024139 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x56v2\" (UniqueName: \"kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2\") pod \"nova-api-0\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " pod="openstack/nova-api-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.114522 5136 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b168d83c-bd4d-4187-915f-59b00d213a23-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.233862 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.244183 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.257015 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.258764 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.261763 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.262086 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.273630 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.297388 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.328582 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.328688 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.328990 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.329258 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-jvh5j\" (UniqueName: \"kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.329311 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.411484 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="579d7134-2752-49f9-b511-ec4c1c43e855" path="/var/lib/kubelet/pods/579d7134-2752-49f9-b511-ec4c1c43e855/volumes" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.412761 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b168d83c-bd4d-4187-915f-59b00d213a23" path="/var/lib/kubelet/pods/b168d83c-bd4d-4187-915f-59b00d213a23/volumes" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.430572 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.430623 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.430686 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.430763 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvh5j\" (UniqueName: \"kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.430780 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.442576 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.443447 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.443482 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " 
pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.444823 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.453455 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvh5j\" (UniqueName: \"kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j\") pod \"nova-metadata-0\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.577074 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 08:53:48 crc kubenswrapper[5136]: E0320 08:53:48.746233 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:48 crc kubenswrapper[5136]: E0320 08:53:48.747831 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:48 crc kubenswrapper[5136]: E0320 08:53:48.749484 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:48 crc kubenswrapper[5136]: E0320 08:53:48.749522 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:53:48 crc kubenswrapper[5136]: I0320 08:53:48.804366 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.028950 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 08:53:49 crc kubenswrapper[5136]: W0320 08:53:49.035843 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfba581c3_e77a_4db7_ac50_bdb17291b2c7.slice/crio-a030a1b8ce6d7dd841b09834af231537b00b2434ef87a4f05635fba547adb80f WatchSource:0}: Error finding container a030a1b8ce6d7dd841b09834af231537b00b2434ef87a4f05635fba547adb80f: Status 404 returned error can't find the container with id a030a1b8ce6d7dd841b09834af231537b00b2434ef87a4f05635fba547adb80f Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.617960 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerStarted","Data":"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.618273 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerStarted","Data":"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 
08:53:49.618292 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerStarted","Data":"68f8bcef42fc4d6d03adc36790caeff5f36c1bbf1af4d1021355e12bffa62849"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.623385 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerStarted","Data":"1e2a347a5b7fd1a421ed6a7c665114567c818ec00d533ffab87b5e587c7ecf89"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.623416 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerStarted","Data":"cd5663a9b617be114b64e32a8582baab8d6015f76d7bc3afb172624a4c98b3c7"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.623428 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerStarted","Data":"a030a1b8ce6d7dd841b09834af231537b00b2434ef87a4f05635fba547adb80f"} Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.645590 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.645543489 podStartE2EDuration="2.645543489s" podCreationTimestamp="2026-03-20 08:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:53:49.634474006 +0000 UTC m=+7461.893785157" watchObservedRunningTime="2026-03-20 08:53:49.645543489 +0000 UTC m=+7461.904854640" Mar 20 08:53:49 crc kubenswrapper[5136]: I0320 08:53:49.688177 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=1.688154907 podStartE2EDuration="1.688154907s" podCreationTimestamp="2026-03-20 08:53:48 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:53:49.668694645 +0000 UTC m=+7461.928005816" watchObservedRunningTime="2026-03-20 08:53:49.688154907 +0000 UTC m=+7461.947466058" Mar 20 08:53:53 crc kubenswrapper[5136]: E0320 08:53:53.746907 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:53 crc kubenswrapper[5136]: E0320 08:53:53.749236 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:53 crc kubenswrapper[5136]: E0320 08:53:53.750540 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:53 crc kubenswrapper[5136]: E0320 08:53:53.750573 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:53:58 crc kubenswrapper[5136]: I0320 08:53:58.297997 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 
08:53:58 crc kubenswrapper[5136]: I0320 08:53:58.299504 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 08:53:58 crc kubenswrapper[5136]: I0320 08:53:58.578116 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 08:53:58 crc kubenswrapper[5136]: I0320 08:53:58.578705 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Mar 20 08:53:58 crc kubenswrapper[5136]: E0320 08:53:58.746584 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:58 crc kubenswrapper[5136]: E0320 08:53:58.747916 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:58 crc kubenswrapper[5136]: E0320 08:53:58.749280 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:53:58 crc kubenswrapper[5136]: E0320 08:53:58.749346 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" 
podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:53:59 crc kubenswrapper[5136]: I0320 08:53:59.381100 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.1.143:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:59 crc kubenswrapper[5136]: I0320 08:53:59.381122 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.1.143:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:59 crc kubenswrapper[5136]: I0320 08:53:59.590962 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.144:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:53:59 crc kubenswrapper[5136]: I0320 08:53:59.590973 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.144:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.163416 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566614-wvfxr"] Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.164483 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.168960 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.169109 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.174510 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566614-wvfxr"] Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.175183 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.264481 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjvkh\" (UniqueName: \"kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh\") pod \"auto-csr-approver-29566614-wvfxr\" (UID: \"746f2ae5-dabf-431a-b344-011a75049862\") " pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.366262 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjvkh\" (UniqueName: \"kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh\") pod \"auto-csr-approver-29566614-wvfxr\" (UID: \"746f2ae5-dabf-431a-b344-011a75049862\") " pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.391116 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjvkh\" (UniqueName: \"kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh\") pod \"auto-csr-approver-29566614-wvfxr\" (UID: \"746f2ae5-dabf-431a-b344-011a75049862\") " 
pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.507503 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:00 crc kubenswrapper[5136]: I0320 08:54:00.972989 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566614-wvfxr"] Mar 20 08:54:01 crc kubenswrapper[5136]: I0320 08:54:01.747020 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" event={"ID":"746f2ae5-dabf-431a-b344-011a75049862","Type":"ContainerStarted","Data":"ae22a92da0932b82609be837ba8aac293ea5e9383babbe89937642d7e6fa4ab1"} Mar 20 08:54:02 crc kubenswrapper[5136]: I0320 08:54:02.770160 5136 generic.go:334] "Generic (PLEG): container finished" podID="746f2ae5-dabf-431a-b344-011a75049862" containerID="b8341630a66939232813fa3ca2eab063f076fc3ac4ee1803ba6693cd8bb7a98d" exitCode=0 Mar 20 08:54:02 crc kubenswrapper[5136]: I0320 08:54:02.770243 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" event={"ID":"746f2ae5-dabf-431a-b344-011a75049862","Type":"ContainerDied","Data":"b8341630a66939232813fa3ca2eab063f076fc3ac4ee1803ba6693cd8bb7a98d"} Mar 20 08:54:03 crc kubenswrapper[5136]: E0320 08:54:03.745965 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4 is running failed: container process not found" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:54:03 crc kubenswrapper[5136]: E0320 08:54:03.746301 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if 
PID of 49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4 is running failed: container process not found" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:54:03 crc kubenswrapper[5136]: E0320 08:54:03.746649 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4 is running failed: container process not found" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Mar 20 08:54:03 crc kubenswrapper[5136]: E0320 08:54:03.746684 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:54:03 crc kubenswrapper[5136]: I0320 08:54:03.809017 5136 generic.go:334] "Generic (PLEG): container finished" podID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" exitCode=137 Mar 20 08:54:03 crc kubenswrapper[5136]: I0320 08:54:03.811021 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fbe2855-6fbb-40f0-bea7-43b853e673ba","Type":"ContainerDied","Data":"49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4"} Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.047731 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.127286 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.141332 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data\") pod \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.141429 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjjhz\" (UniqueName: \"kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz\") pod \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.141546 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle\") pod \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\" (UID: \"8fbe2855-6fbb-40f0-bea7-43b853e673ba\") " Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.148218 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz" (OuterVolumeSpecName: "kube-api-access-cjjhz") pod "8fbe2855-6fbb-40f0-bea7-43b853e673ba" (UID: "8fbe2855-6fbb-40f0-bea7-43b853e673ba"). InnerVolumeSpecName "kube-api-access-cjjhz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.182036 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data" (OuterVolumeSpecName: "config-data") pod "8fbe2855-6fbb-40f0-bea7-43b853e673ba" (UID: "8fbe2855-6fbb-40f0-bea7-43b853e673ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.188658 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8fbe2855-6fbb-40f0-bea7-43b853e673ba" (UID: "8fbe2855-6fbb-40f0-bea7-43b853e673ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.242946 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjvkh\" (UniqueName: \"kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh\") pod \"746f2ae5-dabf-431a-b344-011a75049862\" (UID: \"746f2ae5-dabf-431a-b344-011a75049862\") " Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.243418 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjjhz\" (UniqueName: \"kubernetes.io/projected/8fbe2855-6fbb-40f0-bea7-43b853e673ba-kube-api-access-cjjhz\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.243441 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.243454 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/8fbe2855-6fbb-40f0-bea7-43b853e673ba-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.245966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh" (OuterVolumeSpecName: "kube-api-access-hjvkh") pod "746f2ae5-dabf-431a-b344-011a75049862" (UID: "746f2ae5-dabf-431a-b344-011a75049862"). InnerVolumeSpecName "kube-api-access-hjvkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.345507 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjvkh\" (UniqueName: \"kubernetes.io/projected/746f2ae5-dabf-431a-b344-011a75049862-kube-api-access-hjvkh\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.822444 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.822512 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"8fbe2855-6fbb-40f0-bea7-43b853e673ba","Type":"ContainerDied","Data":"06eb0e932ffea6d92e18230014df407dbddf53d2394ca0952871e976aa85a7c5"} Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.822871 5136 scope.go:117] "RemoveContainer" containerID="49fc3c3e57eec9b1ea0e3d23b1e8c61575aee526a0aa9580c3f40aed335237b4" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.827008 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" event={"ID":"746f2ae5-dabf-431a-b344-011a75049862","Type":"ContainerDied","Data":"ae22a92da0932b82609be837ba8aac293ea5e9383babbe89937642d7e6fa4ab1"} Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.827039 5136 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="ae22a92da0932b82609be837ba8aac293ea5e9383babbe89937642d7e6fa4ab1" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.827075 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566614-wvfxr" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.844790 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.853311 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.871960 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:54:04 crc kubenswrapper[5136]: E0320 08:54:04.872351 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.872367 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:54:04 crc kubenswrapper[5136]: E0320 08:54:04.872395 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="746f2ae5-dabf-431a-b344-011a75049862" containerName="oc" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.872402 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="746f2ae5-dabf-431a-b344-011a75049862" containerName="oc" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.872563 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" containerName="nova-scheduler-scheduler" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.872584 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="746f2ae5-dabf-431a-b344-011a75049862" containerName="oc" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.873261 5136 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.877144 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.883764 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.961206 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.961510 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pchhm\" (UniqueName: \"kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:04 crc kubenswrapper[5136]: I0320 08:54:04.961751 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.063421 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.063527 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.063631 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pchhm\" (UniqueName: \"kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.076677 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.078585 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.083859 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pchhm\" (UniqueName: \"kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm\") pod \"nova-scheduler-0\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") " pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.195731 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566608-tdwn4"] Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 
08:54:05.202303 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.204030 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566608-tdwn4"] Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.457321 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 08:54:05 crc kubenswrapper[5136]: W0320 08:54:05.462954 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod41ed7c59_18ee_44ec_8068_ccc9e82485a6.slice/crio-78428f788f49ec304b60513d248e5c1585ac2ca613eb54e675058865189a70f5 WatchSource:0}: Error finding container 78428f788f49ec304b60513d248e5c1585ac2ca613eb54e675058865189a70f5: Status 404 returned error can't find the container with id 78428f788f49ec304b60513d248e5c1585ac2ca613eb54e675058865189a70f5 Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.836565 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41ed7c59-18ee-44ec-8068-ccc9e82485a6","Type":"ContainerStarted","Data":"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79"} Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.836946 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41ed7c59-18ee-44ec-8068-ccc9e82485a6","Type":"ContainerStarted","Data":"78428f788f49ec304b60513d248e5c1585ac2ca613eb54e675058865189a70f5"} Mar 20 08:54:05 crc kubenswrapper[5136]: I0320 08:54:05.858507 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=1.858488904 podStartE2EDuration="1.858488904s" podCreationTimestamp="2026-03-20 08:54:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 08:54:05.852073616 +0000 UTC m=+7478.111384787" watchObservedRunningTime="2026-03-20 08:54:05.858488904 +0000 UTC m=+7478.117800055" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.298067 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.298142 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.427024 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fbe2855-6fbb-40f0-bea7-43b853e673ba" path="/var/lib/kubelet/pods/8fbe2855-6fbb-40f0-bea7-43b853e673ba/volumes" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.427850 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef82e0a5-a043-48d9-82d6-132dbf0e9b74" path="/var/lib/kubelet/pods/ef82e0a5-a043-48d9-82d6-132dbf0e9b74/volumes" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.577906 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 08:54:06 crc kubenswrapper[5136]: I0320 08:54:06.579064 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.335300 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.366229 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.369686 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.583902 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 08:54:08 crc 
kubenswrapper[5136]: I0320 08:54:08.584410 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.591302 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.880384 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 08:54:08 crc kubenswrapper[5136]: I0320 08:54:08.881615 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.051089 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.054616 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.090022 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.148739 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcrdf\" (UniqueName: \"kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.148821 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 
08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.148903 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.148926 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.149030 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.250879 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcrdf\" (UniqueName: \"kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.250993 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc 
kubenswrapper[5136]: I0320 08:54:09.251080 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.251110 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.251168 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.252716 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.253477 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.253972 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.254795 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.287719 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcrdf\" (UniqueName: \"kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf\") pod \"dnsmasq-dns-65bbbb4567-25rj9\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.378956 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:09 crc kubenswrapper[5136]: I0320 08:54:09.896586 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 08:54:09 crc kubenswrapper[5136]: W0320 08:54:09.900944 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf93080a1_9819_48ad_a84d_ddc2d6ffe5e6.slice/crio-8c1f27408a6ade394d6e2ab0bd5959f877552185e88f9a7307e4a1f9978accc6 WatchSource:0}: Error finding container 8c1f27408a6ade394d6e2ab0bd5959f877552185e88f9a7307e4a1f9978accc6: Status 404 returned error can't find the container with id 8c1f27408a6ade394d6e2ab0bd5959f877552185e88f9a7307e4a1f9978accc6 Mar 20 08:54:10 crc kubenswrapper[5136]: I0320 08:54:10.202517 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Mar 20 08:54:10 crc kubenswrapper[5136]: I0320 08:54:10.893248 5136 generic.go:334] "Generic (PLEG): container finished" podID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerID="43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01" exitCode=0 Mar 20 08:54:10 crc kubenswrapper[5136]: I0320 08:54:10.893405 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" event={"ID":"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6","Type":"ContainerDied","Data":"43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01"} Mar 20 08:54:10 crc kubenswrapper[5136]: I0320 08:54:10.893682 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" event={"ID":"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6","Type":"ContainerStarted","Data":"8c1f27408a6ade394d6e2ab0bd5959f877552185e88f9a7307e4a1f9978accc6"} Mar 20 08:54:11 crc kubenswrapper[5136]: I0320 08:54:11.617387 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:11 crc 
kubenswrapper[5136]: I0320 08:54:11.903184 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" event={"ID":"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6","Type":"ContainerStarted","Data":"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299"} Mar 20 08:54:11 crc kubenswrapper[5136]: I0320 08:54:11.903375 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-api" containerID="cri-o://d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb" gracePeriod=30 Mar 20 08:54:11 crc kubenswrapper[5136]: I0320 08:54:11.903588 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-log" containerID="cri-o://0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b" gracePeriod=30 Mar 20 08:54:11 crc kubenswrapper[5136]: I0320 08:54:11.930590 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" podStartSLOduration=2.930575048 podStartE2EDuration="2.930575048s" podCreationTimestamp="2026-03-20 08:54:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:54:11.923987194 +0000 UTC m=+7484.183298345" watchObservedRunningTime="2026-03-20 08:54:11.930575048 +0000 UTC m=+7484.189886199" Mar 20 08:54:12 crc kubenswrapper[5136]: I0320 08:54:12.913093 5136 generic.go:334] "Generic (PLEG): container finished" podID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerID="0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b" exitCode=143 Mar 20 08:54:12 crc kubenswrapper[5136]: I0320 08:54:12.913173 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerDied","Data":"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b"} Mar 20 08:54:12 crc kubenswrapper[5136]: I0320 08:54:12.913791 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.202498 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.234013 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.822524 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.822579 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.911887 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.938482 5136 generic.go:334] "Generic (PLEG): container finished" podID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerID="d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb" exitCode=0 Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.938547 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.938562 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerDied","Data":"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb"} Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.938623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"45d40dc0-780a-4792-bbbe-d8867e1b2749","Type":"ContainerDied","Data":"68f8bcef42fc4d6d03adc36790caeff5f36c1bbf1af4d1021355e12bffa62849"} Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.938657 5136 scope.go:117] "RemoveContainer" containerID="d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.978484 5136 scope.go:117] "RemoveContainer" containerID="0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.981785 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.981795 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs\") pod \"45d40dc0-780a-4792-bbbe-d8867e1b2749\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.982367 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle\") pod \"45d40dc0-780a-4792-bbbe-d8867e1b2749\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.982445 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-x56v2\" (UniqueName: \"kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2\") pod \"45d40dc0-780a-4792-bbbe-d8867e1b2749\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.982639 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data\") pod \"45d40dc0-780a-4792-bbbe-d8867e1b2749\" (UID: \"45d40dc0-780a-4792-bbbe-d8867e1b2749\") " Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.983519 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs" (OuterVolumeSpecName: "logs") pod "45d40dc0-780a-4792-bbbe-d8867e1b2749" (UID: "45d40dc0-780a-4792-bbbe-d8867e1b2749"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.984656 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/45d40dc0-780a-4792-bbbe-d8867e1b2749-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:15 crc kubenswrapper[5136]: I0320 08:54:15.991119 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2" (OuterVolumeSpecName: "kube-api-access-x56v2") pod "45d40dc0-780a-4792-bbbe-d8867e1b2749" (UID: "45d40dc0-780a-4792-bbbe-d8867e1b2749"). InnerVolumeSpecName "kube-api-access-x56v2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.026760 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "45d40dc0-780a-4792-bbbe-d8867e1b2749" (UID: "45d40dc0-780a-4792-bbbe-d8867e1b2749"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.047661 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data" (OuterVolumeSpecName: "config-data") pod "45d40dc0-780a-4792-bbbe-d8867e1b2749" (UID: "45d40dc0-780a-4792-bbbe-d8867e1b2749"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.086608 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.086646 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d40dc0-780a-4792-bbbe-d8867e1b2749-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.086659 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x56v2\" (UniqueName: \"kubernetes.io/projected/45d40dc0-780a-4792-bbbe-d8867e1b2749-kube-api-access-x56v2\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.118109 5136 scope.go:117] "RemoveContainer" containerID="d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb" Mar 20 08:54:16 crc kubenswrapper[5136]: E0320 08:54:16.118768 5136 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb\": container with ID starting with d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb not found: ID does not exist" containerID="d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.118953 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb"} err="failed to get container status \"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb\": rpc error: code = NotFound desc = could not find container \"d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb\": container with ID starting with d96cc1cad235578f9a36cbb8074bac1ad91922ab580810c458bfd971b92cdfcb not found: ID does not exist" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.119045 5136 scope.go:117] "RemoveContainer" containerID="0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b" Mar 20 08:54:16 crc kubenswrapper[5136]: E0320 08:54:16.119475 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b\": container with ID starting with 0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b not found: ID does not exist" containerID="0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.119516 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b"} err="failed to get container status \"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b\": rpc error: code = NotFound 
desc = could not find container \"0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b\": container with ID starting with 0d63faabf20df3828f5eaa82104cf523eed9c8efd7963ca916b2fdfc33a5b34b not found: ID does not exist" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.274260 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.290162 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.302313 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:16 crc kubenswrapper[5136]: E0320 08:54:16.302868 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-log" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.302888 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-log" Mar 20 08:54:16 crc kubenswrapper[5136]: E0320 08:54:16.302915 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-api" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.302924 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-api" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.303168 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-log" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.303199 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" containerName="nova-api-api" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.304251 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.306712 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.307097 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.307142 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.310245 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.392719 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.393650 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.393712 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.393732 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.393981 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7bfx\" (UniqueName: \"kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.394093 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.407635 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45d40dc0-780a-4792-bbbe-d8867e1b2749" path="/var/lib/kubelet/pods/45d40dc0-780a-4792-bbbe-d8867e1b2749/volumes" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496100 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496361 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496500 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496575 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496703 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7bfx\" (UniqueName: \"kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496790 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.496530 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.501261 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc 
kubenswrapper[5136]: I0320 08:54:16.503028 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.503235 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.504793 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.516743 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7bfx\" (UniqueName: \"kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx\") pod \"nova-api-0\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") " pod="openstack/nova-api-0" Mar 20 08:54:16 crc kubenswrapper[5136]: I0320 08:54:16.636997 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 08:54:17 crc kubenswrapper[5136]: I0320 08:54:17.061572 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Mar 20 08:54:17 crc kubenswrapper[5136]: I0320 08:54:17.957536 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerStarted","Data":"bd88353eb3bfead6453753b043892b43c76148c22dbdd8749c35d5213cf8d63b"} Mar 20 08:54:17 crc kubenswrapper[5136]: I0320 08:54:17.958172 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerStarted","Data":"69c8be45f764ed420f7bbef558c7c52b3207d932e0c8d1c5e50585f4ba78387d"} Mar 20 08:54:17 crc kubenswrapper[5136]: I0320 08:54:17.958184 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerStarted","Data":"0bd47d70acb181d12064205fc44377458fa88b9280bd44fea4624c5f756f1398"} Mar 20 08:54:17 crc kubenswrapper[5136]: I0320 08:54:17.985546 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.985509481 podStartE2EDuration="1.985509481s" podCreationTimestamp="2026-03-20 08:54:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:54:17.984579732 +0000 UTC m=+7490.243890913" watchObservedRunningTime="2026-03-20 08:54:17.985509481 +0000 UTC m=+7490.244820672" Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.380758 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.444187 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"] Mar 20 
08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.444428 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="dnsmasq-dns" containerID="cri-o://d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d" gracePeriod=10 Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.927453 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.976160 5136 generic.go:334] "Generic (PLEG): container finished" podID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerID="d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d" exitCode=0 Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.976221 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.976218 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" event={"ID":"a53d28b6-bc47-4aa3-a413-3716651dc331","Type":"ContainerDied","Data":"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d"} Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.976313 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-cb7b48dc-fv895" event={"ID":"a53d28b6-bc47-4aa3-a413-3716651dc331","Type":"ContainerDied","Data":"3a00b31f5b938544931b8ec8a179f9b8845385e169e4d748a404beb81b702299"} Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.976338 5136 scope.go:117] "RemoveContainer" containerID="d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d" Mar 20 08:54:19 crc kubenswrapper[5136]: I0320 08:54:19.997710 5136 scope.go:117] "RemoveContainer" containerID="e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2" Mar 20 08:54:20 crc 
kubenswrapper[5136]: I0320 08:54:20.037028 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-blnd4"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.050651 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3614-account-create-update-dp5t6"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.055880 5136 scope.go:117] "RemoveContainer" containerID="d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d" Mar 20 08:54:20 crc kubenswrapper[5136]: E0320 08:54:20.058026 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d\": container with ID starting with d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d not found: ID does not exist" containerID="d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.058071 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d"} err="failed to get container status \"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d\": rpc error: code = NotFound desc = could not find container \"d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d\": container with ID starting with d132f78c826ee9737416e6e3fda2794793e3627aa028783f261201774977171d not found: ID does not exist" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.058109 5136 scope.go:117] "RemoveContainer" containerID="e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2" Mar 20 08:54:20 crc kubenswrapper[5136]: E0320 08:54:20.058471 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2\": 
container with ID starting with e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2 not found: ID does not exist" containerID="e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.058514 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2"} err="failed to get container status \"e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2\": rpc error: code = NotFound desc = could not find container \"e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2\": container with ID starting with e990be7626e99b384aaa45d32667345fde2ab271f0835cd098e333ff70311fa2 not found: ID does not exist" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.062083 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-blnd4"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.066152 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config\") pod \"a53d28b6-bc47-4aa3-a413-3716651dc331\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.066284 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc\") pod \"a53d28b6-bc47-4aa3-a413-3716651dc331\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.066326 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb\") pod \"a53d28b6-bc47-4aa3-a413-3716651dc331\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " Mar 20 
08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.066432 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb\") pod \"a53d28b6-bc47-4aa3-a413-3716651dc331\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.066494 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cn8k\" (UniqueName: \"kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k\") pod \"a53d28b6-bc47-4aa3-a413-3716651dc331\" (UID: \"a53d28b6-bc47-4aa3-a413-3716651dc331\") " Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.074037 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3614-account-create-update-dp5t6"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.088031 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k" (OuterVolumeSpecName: "kube-api-access-7cn8k") pod "a53d28b6-bc47-4aa3-a413-3716651dc331" (UID: "a53d28b6-bc47-4aa3-a413-3716651dc331"). InnerVolumeSpecName "kube-api-access-7cn8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.120148 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a53d28b6-bc47-4aa3-a413-3716651dc331" (UID: "a53d28b6-bc47-4aa3-a413-3716651dc331"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.121459 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config" (OuterVolumeSpecName: "config") pod "a53d28b6-bc47-4aa3-a413-3716651dc331" (UID: "a53d28b6-bc47-4aa3-a413-3716651dc331"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.143352 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a53d28b6-bc47-4aa3-a413-3716651dc331" (UID: "a53d28b6-bc47-4aa3-a413-3716651dc331"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.154541 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a53d28b6-bc47-4aa3-a413-3716651dc331" (UID: "a53d28b6-bc47-4aa3-a413-3716651dc331"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.168411 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.168441 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.168485 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.168496 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cn8k\" (UniqueName: \"kubernetes.io/projected/a53d28b6-bc47-4aa3-a413-3716651dc331-kube-api-access-7cn8k\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.168505 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a53d28b6-bc47-4aa3-a413-3716651dc331-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.305354 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.314626 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-cb7b48dc-fv895"] Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.406265 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0749652f-3995-4e34-ba17-55eac4c3530c" path="/var/lib/kubelet/pods/0749652f-3995-4e34-ba17-55eac4c3530c/volumes" Mar 20 08:54:20 crc kubenswrapper[5136]: 
I0320 08:54:20.406773 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fb13f3a-3785-4650-8381-e4d5e6fa7f73" path="/var/lib/kubelet/pods/0fb13f3a-3785-4650-8381-e4d5e6fa7f73/volumes" Mar 20 08:54:20 crc kubenswrapper[5136]: I0320 08:54:20.407448 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" path="/var/lib/kubelet/pods/a53d28b6-bc47-4aa3-a413-3716651dc331/volumes" Mar 20 08:54:26 crc kubenswrapper[5136]: I0320 08:54:26.638064 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 08:54:26 crc kubenswrapper[5136]: I0320 08:54:26.638631 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Mar 20 08:54:27 crc kubenswrapper[5136]: I0320 08:54:27.659015 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.148:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:54:27 crc kubenswrapper[5136]: I0320 08:54:27.659092 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.148:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 20 08:54:32 crc kubenswrapper[5136]: I0320 08:54:32.032479 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-62shw"] Mar 20 08:54:32 crc kubenswrapper[5136]: I0320 08:54:32.041701 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-62shw"] Mar 20 08:54:32 crc kubenswrapper[5136]: I0320 08:54:32.413427 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="21e9b60d-f307-406d-9085-fbd9d8b67cf5" path="/var/lib/kubelet/pods/21e9b60d-f307-406d-9085-fbd9d8b67cf5/volumes" Mar 20 08:54:34 crc kubenswrapper[5136]: I0320 08:54:34.637261 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:54:34 crc kubenswrapper[5136]: I0320 08:54:34.637547 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Mar 20 08:54:36 crc kubenswrapper[5136]: I0320 08:54:36.648485 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 08:54:36 crc kubenswrapper[5136]: I0320 08:54:36.649174 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Mar 20 08:54:36 crc kubenswrapper[5136]: I0320 08:54:36.662214 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 08:54:37 crc kubenswrapper[5136]: I0320 08:54:37.169699 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Mar 20 08:54:39 crc kubenswrapper[5136]: I0320 08:54:39.552802 5136 scope.go:117] "RemoveContainer" containerID="ae45294b801e93d47563db9ba4054a170a4f53699928ebbc069e3e19b4610e4f" Mar 20 08:54:39 crc kubenswrapper[5136]: I0320 08:54:39.572702 5136 scope.go:117] "RemoveContainer" containerID="943e6011fb2bb8f85aa7e1232523d7da6d707090421691ca85ab0e7998c29b98" Mar 20 08:54:39 crc kubenswrapper[5136]: I0320 08:54:39.628029 5136 scope.go:117] "RemoveContainer" containerID="fa17c24ddb45b337fff5f936348cd486ca94ec7240798c49bc135753e4d62ff4" Mar 20 08:54:39 crc kubenswrapper[5136]: I0320 08:54:39.671912 5136 scope.go:117] "RemoveContainer" containerID="ea20f727b60adf5691fc1981831b3690eb78cdb64d09999efea953786d4a4eb5" Mar 20 08:54:45 crc kubenswrapper[5136]: I0320 08:54:45.822366 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:54:45 crc kubenswrapper[5136]: I0320 08:54:45.822900 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:54:47 crc kubenswrapper[5136]: I0320 08:54:47.059164 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-645md"] Mar 20 08:54:47 crc kubenswrapper[5136]: I0320 08:54:47.069693 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-645md"] Mar 20 08:54:48 crc kubenswrapper[5136]: I0320 08:54:48.410780 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e023c878-7ddf-478a-9069-85d32b1d5bf9" path="/var/lib/kubelet/pods/e023c878-7ddf-478a-9069-85d32b1d5bf9/volumes" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.010929 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:54:49 crc kubenswrapper[5136]: E0320 08:54:49.011376 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="init" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.011397 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="init" Mar 20 08:54:49 crc kubenswrapper[5136]: E0320 08:54:49.011421 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="dnsmasq-dns" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.011430 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="dnsmasq-dns" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.011645 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a53d28b6-bc47-4aa3-a413-3716651dc331" containerName="dnsmasq-dns" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.012759 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.016268 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.016509 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.016636 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.016783 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-cz4mq" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.032261 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.032417 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.032445 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.032497 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vqxv\" (UniqueName: \"kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.032547 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.045866 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.061589 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.061992 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-log" containerID="cri-o://de0681385b751e6fbc3db0d7c1a647f36f508772ed5c4a96334e28fe70febb26" gracePeriod=30 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.062182 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-httpd" containerID="cri-o://10a9b30b7e2523cc2c1ff704e0f7c5ae56f024de6f82bf322b7b1d7e0001c420" gracePeriod=30 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.133469 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.133770 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-log" containerID="cri-o://ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3" gracePeriod=30 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.133901 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-httpd" containerID="cri-o://50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c" gracePeriod=30 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.135538 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.135590 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.135652 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vqxv\" (UniqueName: 
\"kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.135705 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.135864 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.136196 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.137187 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.137588 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts\") pod \"horizon-645f4b9fd9-z58jz\" (UID: 
\"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.150435 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.157196 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vqxv\" (UniqueName: \"kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv\") pod \"horizon-645f4b9fd9-z58jz\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.166628 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.168326 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.195101 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.237659 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.237729 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.237802 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.237849 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jxbz\" (UniqueName: \"kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.237886 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.282535 5136 generic.go:334] "Generic (PLEG): container finished" podID="c50cd831-27ab-475b-a608-0558c610394d" containerID="ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3" exitCode=143 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.282613 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerDied","Data":"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3"} Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.286620 5136 generic.go:334] "Generic (PLEG): container finished" podID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerID="de0681385b751e6fbc3db0d7c1a647f36f508772ed5c4a96334e28fe70febb26" exitCode=143 Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.286665 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerDied","Data":"de0681385b751e6fbc3db0d7c1a647f36f508772ed5c4a96334e28fe70febb26"} Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339282 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339321 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jxbz\" (UniqueName: 
\"kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339362 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339432 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339475 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.339717 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.340138 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.340393 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.340726 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.342841 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.361847 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jxbz\" (UniqueName: \"kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz\") pod \"horizon-66d59c77bf-fzn52\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.586504 5136 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.813587 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:54:49 crc kubenswrapper[5136]: I0320 08:54:49.824635 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.010153 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:54:50 crc kubenswrapper[5136]: W0320 08:54:50.017621 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9e26843c_d392_464f_9f00_df9da3231a43.slice/crio-c9ee7ff456ee288905bd3817b56fde5fab98d219327f5c2603f495ba149fd86f WatchSource:0}: Error finding container c9ee7ff456ee288905bd3817b56fde5fab98d219327f5c2603f495ba149fd86f: Status 404 returned error can't find the container with id c9ee7ff456ee288905bd3817b56fde5fab98d219327f5c2603f495ba149fd86f Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.298157 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerStarted","Data":"5c552b108729deb67449c97043a73b66ec936eb1de7cf4e53f9755a54666ffef"} Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.299623 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerStarted","Data":"c9ee7ff456ee288905bd3817b56fde5fab98d219327f5c2603f495ba149fd86f"} Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.817041 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.851773 5136 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.853366 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.864146 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.890311 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908304 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908368 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908409 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908431 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908473 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f297l\" (UniqueName: \"kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908491 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.908534 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.956913 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.995312 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:54:50 crc kubenswrapper[5136]: I0320 08:54:50.997201 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.010877 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f297l\" (UniqueName: \"kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.010914 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.010970 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.011016 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.011060 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: 
I0320 08:54:51.011100 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.011124 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.011894 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.014918 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.017781 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.018242 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.020582 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.033422 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.034120 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f297l\" (UniqueName: \"kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l\") pod \"horizon-cc6c6d576-wrwl5\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.038226 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.112468 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.112532 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.112649 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.112680 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6q7\" (UniqueName: \"kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.112720 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.113082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.113229 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.183103 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215118 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215206 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215263 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215288 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: 
I0320 08:54:51.215364 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215386 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj6q7\" (UniqueName: \"kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215444 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.215663 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.216160 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.217379 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.219731 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.220320 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.220785 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.238642 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj6q7\" (UniqueName: \"kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7\") pod \"horizon-96f64bfb8-g7cfv\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.415397 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.685680 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:54:51 crc kubenswrapper[5136]: I0320 08:54:51.901688 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:54:51 crc kubenswrapper[5136]: W0320 08:54:51.912454 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07e0c938_d0f6_43dc_8864_68149aedc96c.slice/crio-b1ee61ebca0b68cd44117cc6cff786c05a69987a3f3427a20fa4db8597ed371f WatchSource:0}: Error finding container b1ee61ebca0b68cd44117cc6cff786c05a69987a3f3427a20fa4db8597ed371f: Status 404 returned error can't find the container with id b1ee61ebca0b68cd44117cc6cff786c05a69987a3f3427a20fa4db8597ed371f Mar 20 08:54:52 crc kubenswrapper[5136]: I0320 08:54:52.365664 5136 generic.go:334] "Generic (PLEG): container finished" podID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerID="10a9b30b7e2523cc2c1ff704e0f7c5ae56f024de6f82bf322b7b1d7e0001c420" exitCode=0 Mar 20 08:54:52 crc kubenswrapper[5136]: I0320 08:54:52.365707 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerDied","Data":"10a9b30b7e2523cc2c1ff704e0f7c5ae56f024de6f82bf322b7b1d7e0001c420"} Mar 20 08:54:52 crc kubenswrapper[5136]: I0320 08:54:52.368154 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerStarted","Data":"b1ee61ebca0b68cd44117cc6cff786c05a69987a3f3427a20fa4db8597ed371f"} Mar 20 08:54:52 crc kubenswrapper[5136]: I0320 08:54:52.369867 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" 
event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerStarted","Data":"21c759da75cd0657471e5a123c71e4531a1ea5ab95f97bf42da6363bf11ca95c"} Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.041827 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.054878 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175033 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175386 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175431 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b6rl\" (UniqueName: \"kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175495 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175516 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175552 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175613 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175694 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175760 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175794 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: 
\"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175844 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175882 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs\") pod \"416c7b2f-db10-4191-821f-19c79bf4a3b6\" (UID: \"416c7b2f-db10-4191-821f-19c79bf4a3b6\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175922 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l26sf\" (UniqueName: \"kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.175947 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts\") pod \"c50cd831-27ab-475b-a608-0558c610394d\" (UID: \"c50cd831-27ab-475b-a608-0558c610394d\") " Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.177348 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs" (OuterVolumeSpecName: "logs") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.177324 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs" (OuterVolumeSpecName: "logs") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.177631 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.178318 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.190042 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts" (OuterVolumeSpecName: "scripts") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.190187 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl" (OuterVolumeSpecName: "kube-api-access-7b6rl") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "kube-api-access-7b6rl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.190433 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf" (OuterVolumeSpecName: "kube-api-access-l26sf") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "kube-api-access-l26sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.195892 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts" (OuterVolumeSpecName: "scripts") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.265076 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data" (OuterVolumeSpecName: "config-data") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.265159 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.268317 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data" (OuterVolumeSpecName: "config-data") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.274024 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.275617 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "416c7b2f-db10-4191-821f-19c79bf4a3b6" (UID: "416c7b2f-db10-4191-821f-19c79bf4a3b6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.276249 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c50cd831-27ab-475b-a608-0558c610394d" (UID: "c50cd831-27ab-475b-a608-0558c610394d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279021 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7b6rl\" (UniqueName: \"kubernetes.io/projected/416c7b2f-db10-4191-821f-19c79bf4a3b6-kube-api-access-7b6rl\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279058 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279069 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279137 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279157 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c50cd831-27ab-475b-a608-0558c610394d-logs\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279169 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279427 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-httpd-run\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279441 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/416c7b2f-db10-4191-821f-19c79bf4a3b6-logs\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279449 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279461 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279491 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l26sf\" (UniqueName: \"kubernetes.io/projected/c50cd831-27ab-475b-a608-0558c610394d-kube-api-access-l26sf\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279499 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.279507 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/416c7b2f-db10-4191-821f-19c79bf4a3b6-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 
08:54:53.279517 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c50cd831-27ab-475b-a608-0558c610394d-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.387052 5136 generic.go:334] "Generic (PLEG): container finished" podID="c50cd831-27ab-475b-a608-0558c610394d" containerID="50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c" exitCode=0
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.387129 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerDied","Data":"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c"}
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.387156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c50cd831-27ab-475b-a608-0558c610394d","Type":"ContainerDied","Data":"0a442b375725a08359ac9c238f48642a4c758f6fef43750c9ef6734e62c274b1"}
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.387175 5136 scope.go:117] "RemoveContainer" containerID="50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.387307 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.391710 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"416c7b2f-db10-4191-821f-19c79bf4a3b6","Type":"ContainerDied","Data":"18bbdbf6b4b7085096b8a4c5650b4a999121b8fffe8ad31c3a29f6c89c1e9ff8"}
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.391763 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.438713 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.451895 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.460566 5136 scope.go:117] "RemoveContainer" containerID="ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.467880 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.468318 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-log"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468332 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-log"
Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.468356 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-log"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468362 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-log"
Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.468367 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-httpd"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468374 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-httpd"
Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.468390 5136 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-httpd"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468407 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-httpd"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468604 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-log"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468619 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-httpd"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468630 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" containerName="glance-httpd"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.468638 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c50cd831-27ab-475b-a608-0558c610394d" containerName="glance-log"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.469603 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.471834 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.472462 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zsfgx"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.472680 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.472806 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.477408 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.488584 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.510539 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.519891 5136 scope.go:117] "RemoveContainer" containerID="50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.520216 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.521327 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c\": container with ID starting with 50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c not found: ID does not exist" 
containerID="50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.521393 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c"} err="failed to get container status \"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c\": rpc error: code = NotFound desc = could not find container \"50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c\": container with ID starting with 50c452b18075758f0cf7f71c5dd923d9961f9b1f4ea4ea5b12f828970fcbc61c not found: ID does not exist"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.521421 5136 scope.go:117] "RemoveContainer" containerID="ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3"
Mar 20 08:54:53 crc kubenswrapper[5136]: E0320 08:54:53.521849 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3\": container with ID starting with ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3 not found: ID does not exist" containerID="ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.521876 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.521874 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3"} err="failed to get container status \"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3\": rpc error: code = NotFound desc = could not find container \"ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3\": container with ID starting with ab66d88d3293a40338b7fddfe88ed7191c405a72189f7c0ee894a68ec1597fc3 not found: ID does not exist"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.521939 5136 scope.go:117] "RemoveContainer" containerID="10a9b30b7e2523cc2c1ff704e0f7c5ae56f024de6f82bf322b7b1d7e0001c420"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.525939 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.525994 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.551460 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.571368 5136 scope.go:117] "RemoveContainer" containerID="de0681385b751e6fbc3db0d7c1a647f36f508772ed5c4a96334e28fe70febb26"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.586458 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 
08:54:53.586640 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.586683 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.586731 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.586945 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587022 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 
08:54:53.587067 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt97q\" (UniqueName: \"kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587085 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587134 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fxpnx\" (UniqueName: \"kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587183 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587264 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587307 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587427 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.587470 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689391 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689452 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 
08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689476 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689491 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689541 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689565 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689581 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt97q\" (UniqueName: \"kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689599 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689615 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fxpnx\" (UniqueName: \"kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689639 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689665 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689686 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689729 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.689770 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.690213 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.690773 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.691075 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.691268 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs\") pod 
\"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.697313 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.697344 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.697902 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.702601 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.702876 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.706291 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.708369 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.708581 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fxpnx\" (UniqueName: \"kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.710148 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt97q\" (UniqueName: \"kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q\") pod \"glance-default-internal-api-0\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " pod="openstack/glance-default-internal-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.710582 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") " pod="openstack/glance-default-external-api-0"
Mar 20 08:54:53 crc kubenswrapper[5136]: 
I0320 08:54:53.804391 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 08:54:53 crc kubenswrapper[5136]: I0320 08:54:53.846243 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 08:54:54 crc kubenswrapper[5136]: I0320 08:54:54.406273 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="416c7b2f-db10-4191-821f-19c79bf4a3b6" path="/var/lib/kubelet/pods/416c7b2f-db10-4191-821f-19c79bf4a3b6/volumes" Mar 20 08:54:54 crc kubenswrapper[5136]: I0320 08:54:54.407318 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c50cd831-27ab-475b-a608-0558c610394d" path="/var/lib/kubelet/pods/c50cd831-27ab-475b-a608-0558c610394d/volumes" Mar 20 08:54:58 crc kubenswrapper[5136]: I0320 08:54:58.950516 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 08:54:58 crc kubenswrapper[5136]: W0320 08:54:58.954029 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6345b1ce_d7d2_420d_8631_e42fd662d790.slice/crio-118347a18261bbd9231e7dfffb48c3b8dac9d276aacba0e195348777f97cd490 WatchSource:0}: Error finding container 118347a18261bbd9231e7dfffb48c3b8dac9d276aacba0e195348777f97cd490: Status 404 returned error can't find the container with id 118347a18261bbd9231e7dfffb48c3b8dac9d276aacba0e195348777f97cd490 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.067449 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 08:54:59 crc kubenswrapper[5136]: W0320 08:54:59.076025 5136 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod25254bce_daf4_4521_ae48_e6c53e458cb4.slice/crio-b4041a772e07ae38dd21f6daf35e3b02ea073600ec0a68c9ba11fe62a374af18 WatchSource:0}: Error finding container b4041a772e07ae38dd21f6daf35e3b02ea073600ec0a68c9ba11fe62a374af18: Status 404 returned error can't find the container with id b4041a772e07ae38dd21f6daf35e3b02ea073600ec0a68c9ba11fe62a374af18 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.449078 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerStarted","Data":"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.449117 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerStarted","Data":"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.450789 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerStarted","Data":"47d50bc4004ffaf61df2e3967c92d24ab0f3d7825ac3b5d458bf8fcc8f180000"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.450858 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerStarted","Data":"3c83219cfa1ecf8b240d76c9e51d2ba3d7c3e62fbdf8df4070902cafed1bfe2b"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.450966 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66d59c77bf-fzn52" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon-log" containerID="cri-o://3c83219cfa1ecf8b240d76c9e51d2ba3d7c3e62fbdf8df4070902cafed1bfe2b" 
gracePeriod=30 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.451196 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-66d59c77bf-fzn52" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon" containerID="cri-o://47d50bc4004ffaf61df2e3967c92d24ab0f3d7825ac3b5d458bf8fcc8f180000" gracePeriod=30 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.452276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerStarted","Data":"118347a18261bbd9231e7dfffb48c3b8dac9d276aacba0e195348777f97cd490"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.459276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerStarted","Data":"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.459325 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerStarted","Data":"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.462187 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerStarted","Data":"b4041a772e07ae38dd21f6daf35e3b02ea073600ec0a68c9ba11fe62a374af18"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.467757 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerStarted","Data":"eca3fb3cc4f1e7c4ed22ce895d18b9a727e745be0196ad15ebd13d381ede98bd"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.467807 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerStarted","Data":"cb14bd3149719e9313b803ec07b4e48ef12a8424fc422a0ea3648d52b2537bf1"} Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.467922 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-645f4b9fd9-z58jz" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon-log" containerID="cri-o://cb14bd3149719e9313b803ec07b4e48ef12a8424fc422a0ea3648d52b2537bf1" gracePeriod=30 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.467944 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-645f4b9fd9-z58jz" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon" containerID="cri-o://eca3fb3cc4f1e7c4ed22ce895d18b9a727e745be0196ad15ebd13d381ede98bd" gracePeriod=30 Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.475531 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-cc6c6d576-wrwl5" podStartSLOduration=2.688245107 podStartE2EDuration="9.475512489s" podCreationTimestamp="2026-03-20 08:54:50 +0000 UTC" firstStartedPulling="2026-03-20 08:54:51.702154228 +0000 UTC m=+7523.961465379" lastFinishedPulling="2026-03-20 08:54:58.48942161 +0000 UTC m=+7530.748732761" observedRunningTime="2026-03-20 08:54:59.471592669 +0000 UTC m=+7531.730903820" watchObservedRunningTime="2026-03-20 08:54:59.475512489 +0000 UTC m=+7531.734823640" Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.497546 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-645f4b9fd9-z58jz" podStartSLOduration=2.765432888 podStartE2EDuration="11.497528112s" podCreationTimestamp="2026-03-20 08:54:48 +0000 UTC" firstStartedPulling="2026-03-20 08:54:49.824373002 +0000 UTC m=+7522.083684173" lastFinishedPulling="2026-03-20 08:54:58.556468246 
+0000 UTC m=+7530.815779397" observedRunningTime="2026-03-20 08:54:59.491512795 +0000 UTC m=+7531.750823946" watchObservedRunningTime="2026-03-20 08:54:59.497528112 +0000 UTC m=+7531.756839263" Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.514775 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-66d59c77bf-fzn52" podStartSLOduration=2.044896158 podStartE2EDuration="10.514759434s" podCreationTimestamp="2026-03-20 08:54:49 +0000 UTC" firstStartedPulling="2026-03-20 08:54:50.019671458 +0000 UTC m=+7522.278982609" lastFinishedPulling="2026-03-20 08:54:58.489534734 +0000 UTC m=+7530.748845885" observedRunningTime="2026-03-20 08:54:59.513991501 +0000 UTC m=+7531.773302652" watchObservedRunningTime="2026-03-20 08:54:59.514759434 +0000 UTC m=+7531.774070585" Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.543233 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-96f64bfb8-g7cfv" podStartSLOduration=2.901387295 podStartE2EDuration="9.543211855s" podCreationTimestamp="2026-03-20 08:54:50 +0000 UTC" firstStartedPulling="2026-03-20 08:54:51.914147861 +0000 UTC m=+7524.173459002" lastFinishedPulling="2026-03-20 08:54:58.555972411 +0000 UTC m=+7530.815283562" observedRunningTime="2026-03-20 08:54:59.531631777 +0000 UTC m=+7531.790942938" watchObservedRunningTime="2026-03-20 08:54:59.543211855 +0000 UTC m=+7531.802523006" Mar 20 08:54:59 crc kubenswrapper[5136]: I0320 08:54:59.594008 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.479357 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerStarted","Data":"8e6a89b054dab23d4263c5fb97b6aba8bc51276e7bd2c8d9be34c61f68879a63"} Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.480007 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerStarted","Data":"62b81b8fc1d95273635ec6d0f69c524950ac024e8fd9b6ac1d7381fe6f428b6f"} Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.483850 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerStarted","Data":"7ecfa88277a19c2fc4a9782c7beb9c21c6c1a5a38d56723b3e67fc0044f8bbb4"} Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.483891 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerStarted","Data":"9e5b58fa90ab6a9a965276a68d4ee135aa252e61fbc159c5d0aa6f6134637333"} Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.508567 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.5085468330000005 podStartE2EDuration="7.508546833s" podCreationTimestamp="2026-03-20 08:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:55:00.505461698 +0000 UTC m=+7532.764772849" watchObservedRunningTime="2026-03-20 08:55:00.508546833 +0000 UTC m=+7532.767857984" Mar 20 08:55:00 crc kubenswrapper[5136]: I0320 08:55:00.547983 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.547957383 podStartE2EDuration="7.547957383s" podCreationTimestamp="2026-03-20 08:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:55:00.53234352 +0000 UTC m=+7532.791654671" watchObservedRunningTime="2026-03-20 08:55:00.547957383 
+0000 UTC m=+7532.807268534" Mar 20 08:55:01 crc kubenswrapper[5136]: I0320 08:55:01.183691 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:55:01 crc kubenswrapper[5136]: I0320 08:55:01.184109 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:55:01 crc kubenswrapper[5136]: I0320 08:55:01.416086 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:55:01 crc kubenswrapper[5136]: I0320 08:55:01.416160 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.805007 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.805070 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.838582 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.847235 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.847277 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.849382 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.881529 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Mar 20 08:55:03 crc kubenswrapper[5136]: I0320 08:55:03.897073 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Mar 20 08:55:04 crc kubenswrapper[5136]: I0320 08:55:04.531389 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:04 crc kubenswrapper[5136]: I0320 08:55:04.531513 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 08:55:04 crc kubenswrapper[5136]: I0320 08:55:04.531533 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:04 crc kubenswrapper[5136]: I0320 08:55:04.531548 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Mar 20 08:55:06 crc kubenswrapper[5136]: I0320 08:55:06.741359 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:06 crc kubenswrapper[5136]: I0320 08:55:06.744141 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Mar 20 08:55:07 crc kubenswrapper[5136]: I0320 08:55:07.294371 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 08:55:07 crc kubenswrapper[5136]: I0320 08:55:07.516196 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Mar 20 08:55:09 crc kubenswrapper[5136]: I0320 08:55:09.340311 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:55:11 crc kubenswrapper[5136]: I0320 08:55:11.185200 5136 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.151:8443: connect: connection refused" Mar 20 08:55:11 crc kubenswrapper[5136]: I0320 08:55:11.418506 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.152:8443: connect: connection refused" Mar 20 08:55:15 crc kubenswrapper[5136]: I0320 08:55:15.821733 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 08:55:15 crc kubenswrapper[5136]: I0320 08:55:15.822079 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 08:55:15 crc kubenswrapper[5136]: I0320 08:55:15.822129 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 08:55:15 crc kubenswrapper[5136]: I0320 08:55:15.822871 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container 
machine-config-daemon failed liveness probe, will be restarted" Mar 20 08:55:15 crc kubenswrapper[5136]: I0320 08:55:15.822920 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" gracePeriod=600 Mar 20 08:55:15 crc kubenswrapper[5136]: E0320 08:55:15.954034 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:55:16 crc kubenswrapper[5136]: I0320 08:55:16.680193 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" exitCode=0 Mar 20 08:55:16 crc kubenswrapper[5136]: I0320 08:55:16.680457 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"} Mar 20 08:55:16 crc kubenswrapper[5136]: I0320 08:55:16.680577 5136 scope.go:117] "RemoveContainer" containerID="a43b1feb308763542c53114c5f178c20bc1d59b30b0c579b39a73e99b6e66c62" Mar 20 08:55:16 crc kubenswrapper[5136]: I0320 08:55:16.681612 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:55:16 crc kubenswrapper[5136]: E0320 08:55:16.681920 5136 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:55:23 crc kubenswrapper[5136]: I0320 08:55:23.278966 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:55:23 crc kubenswrapper[5136]: I0320 08:55:23.332711 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:55:24 crc kubenswrapper[5136]: I0320 08:55:24.963098 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:55:25 crc kubenswrapper[5136]: I0320 08:55:25.030355 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:55:25 crc kubenswrapper[5136]: I0320 08:55:25.031346 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon-log" containerID="cri-o://83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c" gracePeriod=30 Mar 20 08:55:25 crc kubenswrapper[5136]: I0320 08:55:25.031550 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" containerID="cri-o://87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041" gracePeriod=30 Mar 20 08:55:25 crc kubenswrapper[5136]: I0320 08:55:25.042678 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" 
containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": EOF" Mar 20 08:55:28 crc kubenswrapper[5136]: I0320 08:55:28.170076 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": read tcp 10.217.0.2:47534->10.217.1.151:8443: read: connection reset by peer" Mar 20 08:55:28 crc kubenswrapper[5136]: I0320 08:55:28.795121 5136 generic.go:334] "Generic (PLEG): container finished" podID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerID="87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041" exitCode=0 Mar 20 08:55:28 crc kubenswrapper[5136]: I0320 08:55:28.795177 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerDied","Data":"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041"} Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.808576 5136 generic.go:334] "Generic (PLEG): container finished" podID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerID="eca3fb3cc4f1e7c4ed22ce895d18b9a727e745be0196ad15ebd13d381ede98bd" exitCode=137 Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.808950 5136 generic.go:334] "Generic (PLEG): container finished" podID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerID="cb14bd3149719e9313b803ec07b4e48ef12a8424fc422a0ea3648d52b2537bf1" exitCode=137 Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.808656 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerDied","Data":"eca3fb3cc4f1e7c4ed22ce895d18b9a727e745be0196ad15ebd13d381ede98bd"} Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.809033 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerDied","Data":"cb14bd3149719e9313b803ec07b4e48ef12a8424fc422a0ea3648d52b2537bf1"} Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.811023 5136 generic.go:334] "Generic (PLEG): container finished" podID="9e26843c-d392-464f-9f00-df9da3231a43" containerID="47d50bc4004ffaf61df2e3967c92d24ab0f3d7825ac3b5d458bf8fcc8f180000" exitCode=137 Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.811042 5136 generic.go:334] "Generic (PLEG): container finished" podID="9e26843c-d392-464f-9f00-df9da3231a43" containerID="3c83219cfa1ecf8b240d76c9e51d2ba3d7c3e62fbdf8df4070902cafed1bfe2b" exitCode=137 Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.811059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerDied","Data":"47d50bc4004ffaf61df2e3967c92d24ab0f3d7825ac3b5d458bf8fcc8f180000"} Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.811081 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerDied","Data":"3c83219cfa1ecf8b240d76c9e51d2ba3d7c3e62fbdf8df4070902cafed1bfe2b"} Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.940297 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:55:29 crc kubenswrapper[5136]: I0320 08:55:29.946294 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.089972 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts\") pod \"d6a4482e-e21a-4e56-af69-e824ef4708da\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090018 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key\") pod \"9e26843c-d392-464f-9f00-df9da3231a43\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090089 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs\") pod \"9e26843c-d392-464f-9f00-df9da3231a43\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090116 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vqxv\" (UniqueName: \"kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv\") pod \"d6a4482e-e21a-4e56-af69-e824ef4708da\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090170 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts\") pod \"9e26843c-d392-464f-9f00-df9da3231a43\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090190 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs\") pod \"d6a4482e-e21a-4e56-af69-e824ef4708da\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090242 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key\") pod \"d6a4482e-e21a-4e56-af69-e824ef4708da\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090287 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data\") pod \"d6a4482e-e21a-4e56-af69-e824ef4708da\" (UID: \"d6a4482e-e21a-4e56-af69-e824ef4708da\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090361 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data\") pod \"9e26843c-d392-464f-9f00-df9da3231a43\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090397 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jxbz\" (UniqueName: \"kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz\") pod \"9e26843c-d392-464f-9f00-df9da3231a43\" (UID: \"9e26843c-d392-464f-9f00-df9da3231a43\") " Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.090775 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs" (OuterVolumeSpecName: "logs") pod "9e26843c-d392-464f-9f00-df9da3231a43" (UID: "9e26843c-d392-464f-9f00-df9da3231a43"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.092991 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs" (OuterVolumeSpecName: "logs") pod "d6a4482e-e21a-4e56-af69-e824ef4708da" (UID: "d6a4482e-e21a-4e56-af69-e824ef4708da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.095455 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9e26843c-d392-464f-9f00-df9da3231a43" (UID: "9e26843c-d392-464f-9f00-df9da3231a43"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.096252 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv" (OuterVolumeSpecName: "kube-api-access-7vqxv") pod "d6a4482e-e21a-4e56-af69-e824ef4708da" (UID: "d6a4482e-e21a-4e56-af69-e824ef4708da"). InnerVolumeSpecName "kube-api-access-7vqxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.097458 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "d6a4482e-e21a-4e56-af69-e824ef4708da" (UID: "d6a4482e-e21a-4e56-af69-e824ef4708da"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.099237 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz" (OuterVolumeSpecName: "kube-api-access-4jxbz") pod "9e26843c-d392-464f-9f00-df9da3231a43" (UID: "9e26843c-d392-464f-9f00-df9da3231a43"). InnerVolumeSpecName "kube-api-access-4jxbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.116091 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts" (OuterVolumeSpecName: "scripts") pod "9e26843c-d392-464f-9f00-df9da3231a43" (UID: "9e26843c-d392-464f-9f00-df9da3231a43"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.116157 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts" (OuterVolumeSpecName: "scripts") pod "d6a4482e-e21a-4e56-af69-e824ef4708da" (UID: "d6a4482e-e21a-4e56-af69-e824ef4708da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.119295 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data" (OuterVolumeSpecName: "config-data") pod "d6a4482e-e21a-4e56-af69-e824ef4708da" (UID: "d6a4482e-e21a-4e56-af69-e824ef4708da"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.123328 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data" (OuterVolumeSpecName: "config-data") pod "9e26843c-d392-464f-9f00-df9da3231a43" (UID: "9e26843c-d392-464f-9f00-df9da3231a43"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192520 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192565 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jxbz\" (UniqueName: \"kubernetes.io/projected/9e26843c-d392-464f-9f00-df9da3231a43-kube-api-access-4jxbz\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192578 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192586 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9e26843c-d392-464f-9f00-df9da3231a43-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192594 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9e26843c-d392-464f-9f00-df9da3231a43-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192602 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vqxv\" (UniqueName: 
\"kubernetes.io/projected/d6a4482e-e21a-4e56-af69-e824ef4708da-kube-api-access-7vqxv\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192610 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9e26843c-d392-464f-9f00-df9da3231a43-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192618 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6a4482e-e21a-4e56-af69-e824ef4708da-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192626 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/d6a4482e-e21a-4e56-af69-e824ef4708da-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.192635 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d6a4482e-e21a-4e56-af69-e824ef4708da-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.397349 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:55:30 crc kubenswrapper[5136]: E0320 08:55:30.397578 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.821948 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-645f4b9fd9-z58jz" 
event={"ID":"d6a4482e-e21a-4e56-af69-e824ef4708da","Type":"ContainerDied","Data":"5c552b108729deb67449c97043a73b66ec936eb1de7cf4e53f9755a54666ffef"} Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.822301 5136 scope.go:117] "RemoveContainer" containerID="eca3fb3cc4f1e7c4ed22ce895d18b9a727e745be0196ad15ebd13d381ede98bd" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.822074 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-645f4b9fd9-z58jz" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.825214 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66d59c77bf-fzn52" event={"ID":"9e26843c-d392-464f-9f00-df9da3231a43","Type":"ContainerDied","Data":"c9ee7ff456ee288905bd3817b56fde5fab98d219327f5c2603f495ba149fd86f"} Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.825382 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66d59c77bf-fzn52" Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.872153 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.885209 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-645f4b9fd9-z58jz"] Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.894872 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.909517 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66d59c77bf-fzn52"] Mar 20 08:55:30 crc kubenswrapper[5136]: I0320 08:55:30.992833 5136 scope.go:117] "RemoveContainer" containerID="cb14bd3149719e9313b803ec07b4e48ef12a8424fc422a0ea3648d52b2537bf1" Mar 20 08:55:31 crc kubenswrapper[5136]: I0320 08:55:31.019514 5136 scope.go:117] "RemoveContainer" 
containerID="47d50bc4004ffaf61df2e3967c92d24ab0f3d7825ac3b5d458bf8fcc8f180000" Mar 20 08:55:31 crc kubenswrapper[5136]: I0320 08:55:31.178275 5136 scope.go:117] "RemoveContainer" containerID="3c83219cfa1ecf8b240d76c9e51d2ba3d7c3e62fbdf8df4070902cafed1bfe2b" Mar 20 08:55:31 crc kubenswrapper[5136]: I0320 08:55:31.184247 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.151:8443: connect: connection refused" Mar 20 08:55:32 crc kubenswrapper[5136]: I0320 08:55:32.406731 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e26843c-d392-464f-9f00-df9da3231a43" path="/var/lib/kubelet/pods/9e26843c-d392-464f-9f00-df9da3231a43/volumes" Mar 20 08:55:32 crc kubenswrapper[5136]: I0320 08:55:32.407786 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" path="/var/lib/kubelet/pods/d6a4482e-e21a-4e56-af69-e824ef4708da/volumes" Mar 20 08:55:39 crc kubenswrapper[5136]: I0320 08:55:39.835872 5136 scope.go:117] "RemoveContainer" containerID="7a4fb348e084d0c108a5953823b245c56381a86254bd1ec8ccef0cb8f458e61f" Mar 20 08:55:41 crc kubenswrapper[5136]: I0320 08:55:41.184100 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.151:8443: connect: connection refused" Mar 20 08:55:45 crc kubenswrapper[5136]: I0320 08:55:45.397590 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:55:45 crc kubenswrapper[5136]: E0320 08:55:45.399181 5136 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:55:51 crc kubenswrapper[5136]: I0320 08:55:51.184291 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-cc6c6d576-wrwl5" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.151:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.151:8443: connect: connection refused" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.413534 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469049 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469103 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f297l\" (UniqueName: \"kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469191 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: 
\"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469307 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469386 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.469462 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.470284 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key\") pod \"170b9fcc-77b0-41b5-8e99-cd95411287e9\" (UID: \"170b9fcc-77b0-41b5-8e99-cd95411287e9\") " Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.470328 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs" (OuterVolumeSpecName: "logs") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.470890 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/170b9fcc-77b0-41b5-8e99-cd95411287e9-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.475004 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l" (OuterVolumeSpecName: "kube-api-access-f297l") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "kube-api-access-f297l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.477344 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.492477 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data" (OuterVolumeSpecName: "config-data") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.507418 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts" (OuterVolumeSpecName: "scripts") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.507689 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.530624 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "170b9fcc-77b0-41b5-8e99-cd95411287e9" (UID: "170b9fcc-77b0-41b5-8e99-cd95411287e9"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572374 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572403 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f297l\" (UniqueName: \"kubernetes.io/projected/170b9fcc-77b0-41b5-8e99-cd95411287e9-kube-api-access-f297l\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572413 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572424 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572432 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/170b9fcc-77b0-41b5-8e99-cd95411287e9-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:55 crc kubenswrapper[5136]: I0320 08:55:55.572440 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170b9fcc-77b0-41b5-8e99-cd95411287e9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.050198 5136 generic.go:334] "Generic (PLEG): container finished" podID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerID="83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c" exitCode=137 Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.050246 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerDied","Data":"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c"} Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.050252 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-cc6c6d576-wrwl5" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.050275 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-cc6c6d576-wrwl5" event={"ID":"170b9fcc-77b0-41b5-8e99-cd95411287e9","Type":"ContainerDied","Data":"21c759da75cd0657471e5a123c71e4531a1ea5ab95f97bf42da6363bf11ca95c"} Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.050296 5136 scope.go:117] "RemoveContainer" containerID="87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.097213 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.105362 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-cc6c6d576-wrwl5"] Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.239689 5136 scope.go:117] "RemoveContainer" containerID="83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.264111 5136 scope.go:117] "RemoveContainer" containerID="87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041" Mar 20 08:55:56 crc kubenswrapper[5136]: E0320 08:55:56.264606 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041\": container with ID starting with 87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041 not found: ID does not exist" containerID="87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.264643 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041"} err="failed to get container status 
\"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041\": rpc error: code = NotFound desc = could not find container \"87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041\": container with ID starting with 87994f2c1ec896b001d8142fc7518d6c3a1657b0c71eeaf3fd0c68ba00f35041 not found: ID does not exist" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.264668 5136 scope.go:117] "RemoveContainer" containerID="83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c" Mar 20 08:55:56 crc kubenswrapper[5136]: E0320 08:55:56.264902 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c\": container with ID starting with 83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c not found: ID does not exist" containerID="83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.264923 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c"} err="failed to get container status \"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c\": rpc error: code = NotFound desc = could not find container \"83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c\": container with ID starting with 83314d37ae1ddbd13462dd59b1b33c0dbc22e1e06ec277b452f3f36173acb14c not found: ID does not exist" Mar 20 08:55:56 crc kubenswrapper[5136]: I0320 08:55:56.410490 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" path="/var/lib/kubelet/pods/170b9fcc-77b0-41b5-8e99-cd95411287e9/volumes" Mar 20 08:55:59 crc kubenswrapper[5136]: I0320 08:55:59.400000 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 
08:55:59 crc kubenswrapper[5136]: E0320 08:55:59.400711 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.148973 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566616-x4rw6"] Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149439 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149459 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149476 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149485 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149503 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149512 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149536 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" 
containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149544 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149568 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149576 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: E0320 08:56:00.149599 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149608 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149866 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149890 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149903 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="170b9fcc-77b0-41b5-8e99-cd95411287e9" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149915 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon-log" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.149934 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e26843c-d392-464f-9f00-df9da3231a43" containerName="horizon-log" Mar 20 08:56:00 crc 
kubenswrapper[5136]: I0320 08:56:00.149959 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6a4482e-e21a-4e56-af69-e824ef4708da" containerName="horizon" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.150671 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.152885 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.153150 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.154402 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.161141 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh59x\" (UniqueName: \"kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x\") pod \"auto-csr-approver-29566616-x4rw6\" (UID: \"0c085dee-ef7e-47eb-93aa-6ecf4d45030c\") " pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.163835 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566616-x4rw6"] Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.262624 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh59x\" (UniqueName: \"kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x\") pod \"auto-csr-approver-29566616-x4rw6\" (UID: \"0c085dee-ef7e-47eb-93aa-6ecf4d45030c\") " pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.281789 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh59x\" (UniqueName: \"kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x\") pod \"auto-csr-approver-29566616-x4rw6\" (UID: \"0c085dee-ef7e-47eb-93aa-6ecf4d45030c\") " pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.470716 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" Mar 20 08:56:00 crc kubenswrapper[5136]: I0320 08:56:00.888153 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566616-x4rw6"] Mar 20 08:56:01 crc kubenswrapper[5136]: I0320 08:56:01.103920 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" event={"ID":"0c085dee-ef7e-47eb-93aa-6ecf4d45030c","Type":"ContainerStarted","Data":"ae635ab104571e95e9b835d3a19f15c3dfb79921912d157ec9fb0776109cd976"} Mar 20 08:56:02 crc kubenswrapper[5136]: I0320 08:56:02.113590 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" event={"ID":"0c085dee-ef7e-47eb-93aa-6ecf4d45030c","Type":"ContainerStarted","Data":"fd6a9d8c42b14afc4c799021ad9e86afa50559ab2987d12455e53497c38a9c98"} Mar 20 08:56:02 crc kubenswrapper[5136]: I0320 08:56:02.127831 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" podStartSLOduration=1.224225847 podStartE2EDuration="2.127802662s" podCreationTimestamp="2026-03-20 08:56:00 +0000 UTC" firstStartedPulling="2026-03-20 08:56:00.889277357 +0000 UTC m=+7593.148588508" lastFinishedPulling="2026-03-20 08:56:01.792854172 +0000 UTC m=+7594.052165323" observedRunningTime="2026-03-20 08:56:02.12514767 +0000 UTC m=+7594.384458821" watchObservedRunningTime="2026-03-20 08:56:02.127802662 +0000 UTC m=+7594.387113813" Mar 
20 08:56:03 crc kubenswrapper[5136]: I0320 08:56:03.124723 5136 generic.go:334] "Generic (PLEG): container finished" podID="0c085dee-ef7e-47eb-93aa-6ecf4d45030c" containerID="fd6a9d8c42b14afc4c799021ad9e86afa50559ab2987d12455e53497c38a9c98" exitCode=0
Mar 20 08:56:03 crc kubenswrapper[5136]: I0320 08:56:03.124801 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" event={"ID":"0c085dee-ef7e-47eb-93aa-6ecf4d45030c","Type":"ContainerDied","Data":"fd6a9d8c42b14afc4c799021ad9e86afa50559ab2987d12455e53497c38a9c98"}
Mar 20 08:56:04 crc kubenswrapper[5136]: I0320 08:56:04.660783 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566616-x4rw6"
Mar 20 08:56:04 crc kubenswrapper[5136]: I0320 08:56:04.848271 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh59x\" (UniqueName: \"kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x\") pod \"0c085dee-ef7e-47eb-93aa-6ecf4d45030c\" (UID: \"0c085dee-ef7e-47eb-93aa-6ecf4d45030c\") "
Mar 20 08:56:04 crc kubenswrapper[5136]: I0320 08:56:04.856432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x" (OuterVolumeSpecName: "kube-api-access-gh59x") pod "0c085dee-ef7e-47eb-93aa-6ecf4d45030c" (UID: "0c085dee-ef7e-47eb-93aa-6ecf4d45030c"). InnerVolumeSpecName "kube-api-access-gh59x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:56:04 crc kubenswrapper[5136]: I0320 08:56:04.950050 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh59x\" (UniqueName: \"kubernetes.io/projected/0c085dee-ef7e-47eb-93aa-6ecf4d45030c-kube-api-access-gh59x\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:05 crc kubenswrapper[5136]: I0320 08:56:05.142150 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566616-x4rw6" event={"ID":"0c085dee-ef7e-47eb-93aa-6ecf4d45030c","Type":"ContainerDied","Data":"ae635ab104571e95e9b835d3a19f15c3dfb79921912d157ec9fb0776109cd976"}
Mar 20 08:56:05 crc kubenswrapper[5136]: I0320 08:56:05.142190 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae635ab104571e95e9b835d3a19f15c3dfb79921912d157ec9fb0776109cd976"
Mar 20 08:56:05 crc kubenswrapper[5136]: I0320 08:56:05.142209 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566616-x4rw6"
Mar 20 08:56:05 crc kubenswrapper[5136]: I0320 08:56:05.206609 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566610-hrt5r"]
Mar 20 08:56:05 crc kubenswrapper[5136]: I0320 08:56:05.218288 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566610-hrt5r"]
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.410799 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e312a5ea-3b15-4c57-8b2d-613840a5d9ca" path="/var/lib/kubelet/pods/e312a5ea-3b15-4c57-8b2d-613840a5d9ca/volumes"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.454861 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"]
Mar 20 08:56:06 crc kubenswrapper[5136]: E0320 08:56:06.455239 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c085dee-ef7e-47eb-93aa-6ecf4d45030c" containerName="oc"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.455253 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c085dee-ef7e-47eb-93aa-6ecf4d45030c" containerName="oc"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.455440 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c085dee-ef7e-47eb-93aa-6ecf4d45030c" containerName="oc"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.456315 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.469389 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"]
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.580084 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.580222 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.581237 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slc4h\" (UniqueName: \"kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.581290 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.581390 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.581604 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.581730 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.682999 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slc4h\" (UniqueName: \"kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.683051 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.683076 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.683118 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.683149 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.683190 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.683227 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.684512 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.684974 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.686756 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.694789 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.697827 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.704681 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.704878 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slc4h\" (UniqueName: \"kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h\") pod \"horizon-55ffc4694-d4d2v\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:06 crc kubenswrapper[5136]: I0320 08:56:06.777760 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:07 crc kubenswrapper[5136]: I0320 08:56:07.323210 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"]
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.168704 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-2bbqx"]
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.170598 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.172519 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerStarted","Data":"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d"}
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.172839 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerStarted","Data":"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa"}
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.172861 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerStarted","Data":"a94c046fe10c34e65503024040ef0cdbc5574ea11bd41c94fb47af937848986b"}
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.182985 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3e97-account-create-update-6qvr2"]
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.184550 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.186607 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.203101 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-2bbqx"]
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.207395 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3e97-account-create-update-6qvr2"]
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.228794 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-55ffc4694-d4d2v" podStartSLOduration=2.22876094 podStartE2EDuration="2.22876094s" podCreationTimestamp="2026-03-20 08:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:08.227745638 +0000 UTC m=+7600.487056789" watchObservedRunningTime="2026-03-20 08:56:08.22876094 +0000 UTC m=+7600.488072091"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.360540 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnjgv\" (UniqueName: \"kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.361236 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8r7n\" (UniqueName: \"kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.361412 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.361506 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.463092 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8r7n\" (UniqueName: \"kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.463251 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.463304 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.463366 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnjgv\" (UniqueName: \"kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.464289 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.464311 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.492559 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8r7n\" (UniqueName: \"kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n\") pod \"heat-3e97-account-create-update-6qvr2\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") " pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.492802 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnjgv\" (UniqueName: \"kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv\") pod \"heat-db-create-2bbqx\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") " pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.493571 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:08 crc kubenswrapper[5136]: I0320 08:56:08.523769 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:09 crc kubenswrapper[5136]: I0320 08:56:09.091766 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-2bbqx"]
Mar 20 08:56:09 crc kubenswrapper[5136]: I0320 08:56:09.152840 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3e97-account-create-update-6qvr2"]
Mar 20 08:56:09 crc kubenswrapper[5136]: W0320 08:56:09.153127 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod87521532_0534_4e37_9c80_809877f2a744.slice/crio-b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca WatchSource:0}: Error finding container b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca: Status 404 returned error can't find the container with id b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca
Mar 20 08:56:09 crc kubenswrapper[5136]: I0320 08:56:09.185717 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2bbqx" event={"ID":"7f0f0206-8535-4184-ae20-349019be47b2","Type":"ContainerStarted","Data":"f0faad162fed620a9f564868892f2e09bf11b7882145cdf840c0cd841d342c8c"}
Mar 20 08:56:09 crc kubenswrapper[5136]: I0320 08:56:09.187778 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3e97-account-create-update-6qvr2" event={"ID":"87521532-0534-4e37-9c80-809877f2a744","Type":"ContainerStarted","Data":"b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca"}
Mar 20 08:56:10 crc kubenswrapper[5136]: I0320 08:56:10.257609 5136 generic.go:334] "Generic (PLEG): container finished" podID="87521532-0534-4e37-9c80-809877f2a744" containerID="bd3d02ee4935523ab4eb4492588717b04d2271f1f22be17fbab8ebb01a7e4c49" exitCode=0
Mar 20 08:56:10 crc kubenswrapper[5136]: I0320 08:56:10.258488 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3e97-account-create-update-6qvr2" event={"ID":"87521532-0534-4e37-9c80-809877f2a744","Type":"ContainerDied","Data":"bd3d02ee4935523ab4eb4492588717b04d2271f1f22be17fbab8ebb01a7e4c49"}
Mar 20 08:56:10 crc kubenswrapper[5136]: I0320 08:56:10.268782 5136 generic.go:334] "Generic (PLEG): container finished" podID="7f0f0206-8535-4184-ae20-349019be47b2" containerID="0596189127fdfe0bb4f8c43c9a281f3d0d01a460eb398984e9cddcf692a4beaa" exitCode=0
Mar 20 08:56:10 crc kubenswrapper[5136]: I0320 08:56:10.268839 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2bbqx" event={"ID":"7f0f0206-8535-4184-ae20-349019be47b2","Type":"ContainerDied","Data":"0596189127fdfe0bb4f8c43c9a281f3d0d01a460eb398984e9cddcf692a4beaa"}
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.637435 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.771295 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts\") pod \"87521532-0534-4e37-9c80-809877f2a744\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") "
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.771452 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8r7n\" (UniqueName: \"kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n\") pod \"87521532-0534-4e37-9c80-809877f2a744\" (UID: \"87521532-0534-4e37-9c80-809877f2a744\") "
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.771938 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87521532-0534-4e37-9c80-809877f2a744" (UID: "87521532-0534-4e37-9c80-809877f2a744"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.772176 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87521532-0534-4e37-9c80-809877f2a744-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.777131 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n" (OuterVolumeSpecName: "kube-api-access-x8r7n") pod "87521532-0534-4e37-9c80-809877f2a744" (UID: "87521532-0534-4e37-9c80-809877f2a744"). InnerVolumeSpecName "kube-api-access-x8r7n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.840690 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.873498 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x8r7n\" (UniqueName: \"kubernetes.io/projected/87521532-0534-4e37-9c80-809877f2a744-kube-api-access-x8r7n\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.974471 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cnjgv\" (UniqueName: \"kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv\") pod \"7f0f0206-8535-4184-ae20-349019be47b2\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") "
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.974575 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts\") pod \"7f0f0206-8535-4184-ae20-349019be47b2\" (UID: \"7f0f0206-8535-4184-ae20-349019be47b2\") "
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.975031 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7f0f0206-8535-4184-ae20-349019be47b2" (UID: "7f0f0206-8535-4184-ae20-349019be47b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 08:56:11 crc kubenswrapper[5136]: I0320 08:56:11.977321 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv" (OuterVolumeSpecName: "kube-api-access-cnjgv") pod "7f0f0206-8535-4184-ae20-349019be47b2" (UID: "7f0f0206-8535-4184-ae20-349019be47b2"). InnerVolumeSpecName "kube-api-access-cnjgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.076617 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cnjgv\" (UniqueName: \"kubernetes.io/projected/7f0f0206-8535-4184-ae20-349019be47b2-kube-api-access-cnjgv\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.076668 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7f0f0206-8535-4184-ae20-349019be47b2-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.296297 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-2bbqx" event={"ID":"7f0f0206-8535-4184-ae20-349019be47b2","Type":"ContainerDied","Data":"f0faad162fed620a9f564868892f2e09bf11b7882145cdf840c0cd841d342c8c"}
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.296333 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0faad162fed620a9f564868892f2e09bf11b7882145cdf840c0cd841d342c8c"
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.296394 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-2bbqx"
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.297882 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-3e97-account-create-update-6qvr2" event={"ID":"87521532-0534-4e37-9c80-809877f2a744","Type":"ContainerDied","Data":"b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca"}
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.297910 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b89b1768c8f3567d82025d4ad6041c39319de885f379a8a39cd58707ffd25dca"
Mar 20 08:56:12 crc kubenswrapper[5136]: I0320 08:56:12.297955 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-6qvr2"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.396072 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-gnx9m"]
Mar 20 08:56:13 crc kubenswrapper[5136]: E0320 08:56:13.397165 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f0f0206-8535-4184-ae20-349019be47b2" containerName="mariadb-database-create"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.397186 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f0f0206-8535-4184-ae20-349019be47b2" containerName="mariadb-database-create"
Mar 20 08:56:13 crc kubenswrapper[5136]: E0320 08:56:13.397205 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87521532-0534-4e37-9c80-809877f2a744" containerName="mariadb-account-create-update"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.397213 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="87521532-0534-4e37-9c80-809877f2a744" containerName="mariadb-account-create-update"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.397424 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f0f0206-8535-4184-ae20-349019be47b2" containerName="mariadb-database-create"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.397471 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="87521532-0534-4e37-9c80-809877f2a744" containerName="mariadb-account-create-update"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.398782 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.404790 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gs9jr"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.405761 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.414154 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gnx9m"]
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.503385 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.503486 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.503611 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7kdb\" (UniqueName: \"kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.605846 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.605933 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.606098 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7kdb\" (UniqueName: \"kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.611627 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.611702 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.625574 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7kdb\" (UniqueName: \"kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb\") pod \"heat-db-sync-gnx9m\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") " pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:13 crc kubenswrapper[5136]: I0320 08:56:13.717594 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:14 crc kubenswrapper[5136]: I0320 08:56:14.190531 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-gnx9m"]
Mar 20 08:56:14 crc kubenswrapper[5136]: I0320 08:56:14.326548 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gnx9m" event={"ID":"d5f2ce8c-5295-423c-a81f-511d7abd0495","Type":"ContainerStarted","Data":"30fd3cb470777b59d4e09990972172f0a948ce7612bcf75722bfae72d9fee57e"}
Mar 20 08:56:14 crc kubenswrapper[5136]: I0320 08:56:14.397242 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"
Mar 20 08:56:14 crc kubenswrapper[5136]: E0320 08:56:14.397486 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:56:16 crc kubenswrapper[5136]: I0320 08:56:16.778132 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:16 crc kubenswrapper[5136]: I0320 08:56:16.778400 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55ffc4694-d4d2v"
Mar 20 08:56:23 crc kubenswrapper[5136]: I0320 08:56:23.444165 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gnx9m" event={"ID":"d5f2ce8c-5295-423c-a81f-511d7abd0495","Type":"ContainerStarted","Data":"f6175692bafced85aff7b1e4e0d62331fe62665eef24726a05ac9691debbacde"}
Mar 20 08:56:23 crc kubenswrapper[5136]: I0320 08:56:23.476673 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-gnx9m" podStartSLOduration=2.076991063 podStartE2EDuration="10.476654289s" podCreationTimestamp="2026-03-20 08:56:13 +0000 UTC" firstStartedPulling="2026-03-20 08:56:14.195768471 +0000 UTC m=+7606.455079622" lastFinishedPulling="2026-03-20 08:56:22.595431707 +0000 UTC m=+7614.854742848" observedRunningTime="2026-03-20 08:56:23.469058405 +0000 UTC m=+7615.728369566" watchObservedRunningTime="2026-03-20 08:56:23.476654289 +0000 UTC m=+7615.735965440"
Mar 20 08:56:24 crc kubenswrapper[5136]: I0320 08:56:24.452596 5136 generic.go:334] "Generic (PLEG): container finished" podID="d5f2ce8c-5295-423c-a81f-511d7abd0495" containerID="f6175692bafced85aff7b1e4e0d62331fe62665eef24726a05ac9691debbacde" exitCode=0
Mar 20 08:56:24 crc kubenswrapper[5136]: I0320 08:56:24.452690 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gnx9m" event={"ID":"d5f2ce8c-5295-423c-a81f-511d7abd0495","Type":"ContainerDied","Data":"f6175692bafced85aff7b1e4e0d62331fe62665eef24726a05ac9691debbacde"}
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.886917 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-gnx9m"
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.964967 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7kdb\" (UniqueName: \"kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb\") pod \"d5f2ce8c-5295-423c-a81f-511d7abd0495\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") "
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.965358 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data\") pod \"d5f2ce8c-5295-423c-a81f-511d7abd0495\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") "
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.965426 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle\") pod \"d5f2ce8c-5295-423c-a81f-511d7abd0495\" (UID: \"d5f2ce8c-5295-423c-a81f-511d7abd0495\") "
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.971085 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb" (OuterVolumeSpecName: "kube-api-access-p7kdb") pod "d5f2ce8c-5295-423c-a81f-511d7abd0495" (UID: "d5f2ce8c-5295-423c-a81f-511d7abd0495"). InnerVolumeSpecName "kube-api-access-p7kdb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:56:25 crc kubenswrapper[5136]: I0320 08:56:25.993798 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5f2ce8c-5295-423c-a81f-511d7abd0495" (UID: "d5f2ce8c-5295-423c-a81f-511d7abd0495"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.042782 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data" (OuterVolumeSpecName: "config-data") pod "d5f2ce8c-5295-423c-a81f-511d7abd0495" (UID: "d5f2ce8c-5295-423c-a81f-511d7abd0495"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.067674 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7kdb\" (UniqueName: \"kubernetes.io/projected/d5f2ce8c-5295-423c-a81f-511d7abd0495-kube-api-access-p7kdb\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.067705 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.067715 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5f2ce8c-5295-423c-a81f-511d7abd0495-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.475783 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-gnx9m" event={"ID":"d5f2ce8c-5295-423c-a81f-511d7abd0495","Type":"ContainerDied","Data":"30fd3cb470777b59d4e09990972172f0a948ce7612bcf75722bfae72d9fee57e"} Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.475843 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30fd3cb470777b59d4e09990972172f0a948ce7612bcf75722bfae72d9fee57e" Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.475914 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-gnx9m" Mar 20 08:56:26 crc kubenswrapper[5136]: I0320 08:56:26.780515 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8443: connect: connection refused" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.249678 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"] Mar 20 08:56:28 crc kubenswrapper[5136]: E0320 08:56:28.250627 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5f2ce8c-5295-423c-a81f-511d7abd0495" containerName="heat-db-sync" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.250643 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5f2ce8c-5295-423c-a81f-511d7abd0495" containerName="heat-db-sync" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.250842 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5f2ce8c-5295-423c-a81f-511d7abd0495" containerName="heat-db-sync" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.251501 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.255178 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gs9jr" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.255251 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.286099 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.287954 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"] Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.320088 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c6lz\" (UniqueName: \"kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.320253 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.320361 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc 
kubenswrapper[5136]: I0320 08:56:28.320434 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.428829 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.428942 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c6lz\" (UniqueName: \"kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.429067 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.429203 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc 
kubenswrapper[5136]: I0320 08:56:28.438751 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.441101 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.441477 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.470719 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c6lz\" (UniqueName: \"kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz\") pod \"heat-engine-5644df8c69-t5dqn\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.476406 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7cd845d9cb-blq7p"] Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.485013 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.491636 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.508872 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7cd845d9cb-blq7p"] Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.523877 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"] Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.525693 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.530350 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.532189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.532404 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.532436 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom\") pod 
\"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.532508 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txf47\" (UniqueName: \"kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.545901 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"] Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.587712 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-gs9jr" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.596367 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.637983 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638396 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638426 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638484 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kms9\" (UniqueName: \"kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638513 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txf47\" (UniqueName: \"kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638580 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.638630 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.639992 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.651742 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.652116 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.652166 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.668274 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txf47\" (UniqueName: \"kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47\") pod \"heat-api-7cd845d9cb-blq7p\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.746133 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.746303 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kms9\" (UniqueName: \"kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.746533 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.746610 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.757008 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.773115 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.776882 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kms9\" (UniqueName: \"kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.780341 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle\") pod \"heat-cfnapi-57d5b7fdb9-xnrxq\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.860986 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:28 crc kubenswrapper[5136]: I0320 08:56:28.878317 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.181278 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"] Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.397416 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:56:29 crc kubenswrapper[5136]: E0320 08:56:29.397984 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:56:29 crc kubenswrapper[5136]: W0320 08:56:29.480202 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa3f75f3_0821_4381_9b69_18074378cbf3.slice/crio-d6453cc156fe8ce41d248960559aabcd01f5f57bed73da8f8701ea242043e061 WatchSource:0}: Error finding container d6453cc156fe8ce41d248960559aabcd01f5f57bed73da8f8701ea242043e061: Status 404 returned error can't find the container with id d6453cc156fe8ce41d248960559aabcd01f5f57bed73da8f8701ea242043e061 Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.491949 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"] Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.532059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" event={"ID":"fa3f75f3-0821-4381-9b69-18074378cbf3","Type":"ContainerStarted","Data":"d6453cc156fe8ce41d248960559aabcd01f5f57bed73da8f8701ea242043e061"} Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.534046 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5644df8c69-t5dqn" event={"ID":"cdfda925-e99e-45fc-9fe8-c91b77e3179e","Type":"ContainerStarted","Data":"8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c"} Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.534101 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5644df8c69-t5dqn" event={"ID":"cdfda925-e99e-45fc-9fe8-c91b77e3179e","Type":"ContainerStarted","Data":"fb162b157e354506481d1c7139390a7d3e392195416ceb14e79870cc4731ee73"} Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.534209 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.572512 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5644df8c69-t5dqn" podStartSLOduration=1.572478439 podStartE2EDuration="1.572478439s" podCreationTimestamp="2026-03-20 08:56:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:29.566809903 +0000 UTC m=+7621.826121064" watchObservedRunningTime="2026-03-20 08:56:29.572478439 +0000 UTC m=+7621.831789590" Mar 20 08:56:29 crc kubenswrapper[5136]: W0320 08:56:29.597949 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod934aadcd_ca9b_42cf_a5a8_4474010a97a7.slice/crio-1e15cef1a88e3eca46060f66b63f368657a1520f8f3e2b59b00a1d4191407743 WatchSource:0}: Error finding container 1e15cef1a88e3eca46060f66b63f368657a1520f8f3e2b59b00a1d4191407743: Status 404 returned error can't find the container with id 1e15cef1a88e3eca46060f66b63f368657a1520f8f3e2b59b00a1d4191407743 Mar 20 08:56:29 crc kubenswrapper[5136]: I0320 08:56:29.600829 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-api-7cd845d9cb-blq7p"] Mar 20 08:56:30 crc kubenswrapper[5136]: I0320 08:56:30.553849 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7cd845d9cb-blq7p" event={"ID":"934aadcd-ca9b-42cf-a5a8-4474010a97a7","Type":"ContainerStarted","Data":"1e15cef1a88e3eca46060f66b63f368657a1520f8f3e2b59b00a1d4191407743"} Mar 20 08:56:32 crc kubenswrapper[5136]: I0320 08:56:32.571162 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" event={"ID":"fa3f75f3-0821-4381-9b69-18074378cbf3","Type":"ContainerStarted","Data":"551cf6455a7f1a2c0342803fffb1926a0492784af45f7717333269d72d5d5b43"} Mar 20 08:56:32 crc kubenswrapper[5136]: I0320 08:56:32.572192 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:32 crc kubenswrapper[5136]: I0320 08:56:32.573276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7cd845d9cb-blq7p" event={"ID":"934aadcd-ca9b-42cf-a5a8-4474010a97a7","Type":"ContainerStarted","Data":"8948947ccc38fddba8f1e4bf60ba5283b5e0ca552cec8c2bcd0f1ef15f82e469"} Mar 20 08:56:32 crc kubenswrapper[5136]: I0320 08:56:32.573519 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:32 crc kubenswrapper[5136]: I0320 08:56:32.590972 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" podStartSLOduration=2.315507643 podStartE2EDuration="4.590954812s" podCreationTimestamp="2026-03-20 08:56:28 +0000 UTC" firstStartedPulling="2026-03-20 08:56:29.482679238 +0000 UTC m=+7621.741990389" lastFinishedPulling="2026-03-20 08:56:31.758126407 +0000 UTC m=+7624.017437558" observedRunningTime="2026-03-20 08:56:32.587846715 +0000 UTC m=+7624.847157856" watchObservedRunningTime="2026-03-20 08:56:32.590954812 +0000 UTC m=+7624.850265963" Mar 20 08:56:32 crc 
kubenswrapper[5136]: I0320 08:56:32.618380 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7cd845d9cb-blq7p" podStartSLOduration=2.460320267 podStartE2EDuration="4.61835586s" podCreationTimestamp="2026-03-20 08:56:28 +0000 UTC" firstStartedPulling="2026-03-20 08:56:29.600970271 +0000 UTC m=+7621.860281422" lastFinishedPulling="2026-03-20 08:56:31.759005864 +0000 UTC m=+7624.018317015" observedRunningTime="2026-03-20 08:56:32.61122243 +0000 UTC m=+7624.870533591" watchObservedRunningTime="2026-03-20 08:56:32.61835586 +0000 UTC m=+7624.877667011" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.264293 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.271228 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.286265 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.295720 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.297095 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.311143 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.312352 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.319881 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.319942 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhkvs\" (UniqueName: \"kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.319996 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.320068 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.321530 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.331188 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421258 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421310 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421341 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421357 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqgkb\" (UniqueName: \"kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421411 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: 
\"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421436 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421452 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45cv\" (UniqueName: \"kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421477 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421497 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421544 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data\") pod \"heat-engine-7659754fcd-klwkv\" (UID: 
\"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421655 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhkvs\" (UniqueName: \"kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.421675 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.428027 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.428490 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.429751 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " 
pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.440531 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhkvs\" (UniqueName: \"kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs\") pod \"heat-engine-7659754fcd-klwkv\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") " pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523539 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523604 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523629 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqgkb\" (UniqueName: \"kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523727 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " 
pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523766 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523787 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h45cv\" (UniqueName: \"kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523844 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.523966 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.528043 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 
crc kubenswrapper[5136]: I0320 08:56:35.528244 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.529132 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.530606 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.531680 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.532643 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.542043 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-wqgkb\" (UniqueName: \"kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb\") pod \"heat-cfnapi-697448b746-tf7pw\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.542641 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h45cv\" (UniqueName: \"kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv\") pod \"heat-api-86cfd8fb5c-2kxss\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.587713 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.620110 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:35 crc kubenswrapper[5136]: I0320 08:56:35.639960 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:36 crc kubenswrapper[5136]: W0320 08:56:36.048176 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53ac16e5_846e_40c1_a361_0815d231345a.slice/crio-d50b7d1e3cf945251d9601fa11e23c7c876806ca081f20f39fb0f6c33187004b WatchSource:0}: Error finding container d50b7d1e3cf945251d9601fa11e23c7c876806ca081f20f39fb0f6c33187004b: Status 404 returned error can't find the container with id d50b7d1e3cf945251d9601fa11e23c7c876806ca081f20f39fb0f6c33187004b Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.049912 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.184017 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.197780 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:36 crc kubenswrapper[5136]: W0320 08:56:36.199151 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod525e3cbe_0215_4bd0_b835_95c6d6001b9a.slice/crio-0117f8454b4b92087fd6d028cc295e56c9566ccd0cea7c0fc8c9dfc4ba503aa1 WatchSource:0}: Error finding container 0117f8454b4b92087fd6d028cc295e56c9566ccd0cea7c0fc8c9dfc4ba503aa1: Status 404 returned error can't find the container with id 0117f8454b4b92087fd6d028cc295e56c9566ccd0cea7c0fc8c9dfc4ba503aa1 Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.338668 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7cd845d9cb-blq7p"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.338930 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7cd845d9cb-blq7p" 
podUID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" containerName="heat-api" containerID="cri-o://8948947ccc38fddba8f1e4bf60ba5283b5e0ca552cec8c2bcd0f1ef15f82e469" gracePeriod=60 Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.347664 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.347883 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" podUID="fa3f75f3-0821-4381-9b69-18074378cbf3" containerName="heat-cfnapi" containerID="cri-o://551cf6455a7f1a2c0342803fffb1926a0492784af45f7717333269d72d5d5b43" gracePeriod=60 Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.396484 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.398382 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.401415 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.401601 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.449067 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.449357 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 
08:56:36.449421 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.449603 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrrlc\" (UniqueName: \"kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.450384 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.450403 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.450471 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.449623 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.451404 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.451517 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.456028 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.456532 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.553291 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrrlc\" (UniqueName: \"kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " 
pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554016 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554045 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554107 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grml6\" (UniqueName: \"kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554140 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554168 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: 
\"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554227 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554257 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554275 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554305 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554326 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: 
\"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.554398 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.560567 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.560600 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.560645 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.572226 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " 
pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.572358 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.573102 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrrlc\" (UniqueName: \"kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc\") pod \"heat-api-7dbf74ffb7-gw5nj\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.611144 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7659754fcd-klwkv" event={"ID":"53ac16e5-846e-40c1-a361-0815d231345a","Type":"ContainerStarted","Data":"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"} Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.611203 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7659754fcd-klwkv" event={"ID":"53ac16e5-846e-40c1-a361-0815d231345a","Type":"ContainerStarted","Data":"d50b7d1e3cf945251d9601fa11e23c7c876806ca081f20f39fb0f6c33187004b"} Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.611393 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.613305 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697448b746-tf7pw" event={"ID":"525e3cbe-0215-4bd0-b835-95c6d6001b9a","Type":"ContainerStarted","Data":"0117f8454b4b92087fd6d028cc295e56c9566ccd0cea7c0fc8c9dfc4ba503aa1"} Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.614956 
5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86cfd8fb5c-2kxss" event={"ID":"51d025cc-fa04-4871-a937-d0967d7aecf8","Type":"ContainerStarted","Data":"422aee8174cf59684d77d072b55a1c197dd730359392605486620f801d1fb15a"} Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.614982 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86cfd8fb5c-2kxss" event={"ID":"51d025cc-fa04-4871-a937-d0967d7aecf8","Type":"ContainerStarted","Data":"39980fbfc379985ed683289951f5c79560baffaa957735d8c0f526ef670d64d1"} Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.630588 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-7659754fcd-klwkv" podStartSLOduration=1.63057073 podStartE2EDuration="1.63057073s" podCreationTimestamp="2026-03-20 08:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:36.62670808 +0000 UTC m=+7628.886019241" watchObservedRunningTime="2026-03-20 08:56:36.63057073 +0000 UTC m=+7628.889881881" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658484 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658564 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658677 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658752 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658859 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grml6\" (UniqueName: \"kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.658920 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.663726 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.664377 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.664501 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.674902 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.713204 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grml6\" (UniqueName: \"kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.719780 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle\") pod \"heat-cfnapi-55f46cdf9d-2mcgl\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.815302 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:36 crc kubenswrapper[5136]: I0320 08:56:36.832870 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.317306 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.438025 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.676599 5136 generic.go:334] "Generic (PLEG): container finished" podID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerID="422aee8174cf59684d77d072b55a1c197dd730359392605486620f801d1fb15a" exitCode=1 Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.677031 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86cfd8fb5c-2kxss" event={"ID":"51d025cc-fa04-4871-a937-d0967d7aecf8","Type":"ContainerDied","Data":"422aee8174cf59684d77d072b55a1c197dd730359392605486620f801d1fb15a"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.677255 5136 scope.go:117] "RemoveContainer" containerID="422aee8174cf59684d77d072b55a1c197dd730359392605486620f801d1fb15a" Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.682428 5136 generic.go:334] "Generic (PLEG): container finished" podID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerID="d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde" exitCode=1 Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.682899 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697448b746-tf7pw" event={"ID":"525e3cbe-0215-4bd0-b835-95c6d6001b9a","Type":"ContainerDied","Data":"d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.683959 5136 scope.go:117] "RemoveContainer" 
containerID="d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde" Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.684697 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" event={"ID":"25dc915a-6dbf-4622-bd14-1b372cfe9acc","Type":"ContainerStarted","Data":"3210b5910d0a16311abb43994ee90a4669047a646cbfeed19742a3d4c20fe707"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.688555 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dbf74ffb7-gw5nj" event={"ID":"d397a968-433e-4de9-8ed7-d0247aa5e775","Type":"ContainerStarted","Data":"ca4ec121b137203fb91f384175b7088e11aa189eac35f5c700b19c4a087e9179"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.706297 5136 generic.go:334] "Generic (PLEG): container finished" podID="fa3f75f3-0821-4381-9b69-18074378cbf3" containerID="551cf6455a7f1a2c0342803fffb1926a0492784af45f7717333269d72d5d5b43" exitCode=0 Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.706406 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" event={"ID":"fa3f75f3-0821-4381-9b69-18074378cbf3","Type":"ContainerDied","Data":"551cf6455a7f1a2c0342803fffb1926a0492784af45f7717333269d72d5d5b43"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.724964 5136 generic.go:334] "Generic (PLEG): container finished" podID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" containerID="8948947ccc38fddba8f1e4bf60ba5283b5e0ca552cec8c2bcd0f1ef15f82e469" exitCode=0 Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.726267 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7cd845d9cb-blq7p" event={"ID":"934aadcd-ca9b-42cf-a5a8-4474010a97a7","Type":"ContainerDied","Data":"8948947ccc38fddba8f1e4bf60ba5283b5e0ca552cec8c2bcd0f1ef15f82e469"} Mar 20 08:56:37 crc kubenswrapper[5136]: I0320 08:56:37.977371 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.023411 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:38 crc kubenswrapper[5136]: E0320 08:56:38.029455 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod525e3cbe_0215_4bd0_b835_95c6d6001b9a.slice/crio-conmon-d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod525e3cbe_0215_4bd0_b835_95c6d6001b9a.slice/crio-d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde.scope\": RecentStats: unable to find data in memory cache]" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096377 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data\") pod \"fa3f75f3-0821-4381-9b69-18074378cbf3\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096462 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kms9\" (UniqueName: \"kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9\") pod \"fa3f75f3-0821-4381-9b69-18074378cbf3\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096481 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom\") pod \"fa3f75f3-0821-4381-9b69-18074378cbf3\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " Mar 20 
08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096528 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom\") pod \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096595 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data\") pod \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096633 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle\") pod \"fa3f75f3-0821-4381-9b69-18074378cbf3\" (UID: \"fa3f75f3-0821-4381-9b69-18074378cbf3\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096680 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txf47\" (UniqueName: \"kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47\") pod \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.096721 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle\") pod \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\" (UID: \"934aadcd-ca9b-42cf-a5a8-4474010a97a7\") " Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.128061 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "934aadcd-ca9b-42cf-a5a8-4474010a97a7" (UID: "934aadcd-ca9b-42cf-a5a8-4474010a97a7"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.199636 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.267823 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9" (OuterVolumeSpecName: "kube-api-access-8kms9") pod "fa3f75f3-0821-4381-9b69-18074378cbf3" (UID: "fa3f75f3-0821-4381-9b69-18074378cbf3"). InnerVolumeSpecName "kube-api-access-8kms9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.267973 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47" (OuterVolumeSpecName: "kube-api-access-txf47") pod "934aadcd-ca9b-42cf-a5a8-4474010a97a7" (UID: "934aadcd-ca9b-42cf-a5a8-4474010a97a7"). InnerVolumeSpecName "kube-api-access-txf47". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.295983 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fa3f75f3-0821-4381-9b69-18074378cbf3" (UID: "fa3f75f3-0821-4381-9b69-18074378cbf3"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.302333 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kms9\" (UniqueName: \"kubernetes.io/projected/fa3f75f3-0821-4381-9b69-18074378cbf3-kube-api-access-8kms9\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.302380 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.302394 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txf47\" (UniqueName: \"kubernetes.io/projected/934aadcd-ca9b-42cf-a5a8-4474010a97a7-kube-api-access-txf47\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.307999 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa3f75f3-0821-4381-9b69-18074378cbf3" (UID: "fa3f75f3-0821-4381-9b69-18074378cbf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.328329 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "934aadcd-ca9b-42cf-a5a8-4474010a97a7" (UID: "934aadcd-ca9b-42cf-a5a8-4474010a97a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.372114 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data" (OuterVolumeSpecName: "config-data") pod "fa3f75f3-0821-4381-9b69-18074378cbf3" (UID: "fa3f75f3-0821-4381-9b69-18074378cbf3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.386260 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data" (OuterVolumeSpecName: "config-data") pod "934aadcd-ca9b-42cf-a5a8-4474010a97a7" (UID: "934aadcd-ca9b-42cf-a5a8-4474010a97a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.404431 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.404480 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.404493 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa3f75f3-0821-4381-9b69-18074378cbf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.404505 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/934aadcd-ca9b-42cf-a5a8-4474010a97a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:38 crc 
kubenswrapper[5136]: I0320 08:56:38.740743 5136 generic.go:334] "Generic (PLEG): container finished" podID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerID="b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb" exitCode=1 Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.740797 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697448b746-tf7pw" event={"ID":"525e3cbe-0215-4bd0-b835-95c6d6001b9a","Type":"ContainerDied","Data":"b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.741020 5136 scope.go:117] "RemoveContainer" containerID="d7941245ed006543c7b340a751fdf7534d0efba90804835ccec07fb981f3cbde" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.742105 5136 scope.go:117] "RemoveContainer" containerID="b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb" Mar 20 08:56:38 crc kubenswrapper[5136]: E0320 08:56:38.742522 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-697448b746-tf7pw_openstack(525e3cbe-0215-4bd0-b835-95c6d6001b9a)\"" pod="openstack/heat-cfnapi-697448b746-tf7pw" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.750577 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" event={"ID":"25dc915a-6dbf-4622-bd14-1b372cfe9acc","Type":"ContainerStarted","Data":"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.750732 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.760393 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dbf74ffb7-gw5nj" 
event={"ID":"d397a968-433e-4de9-8ed7-d0247aa5e775","Type":"ContainerStarted","Data":"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.760462 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.786034 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" event={"ID":"fa3f75f3-0821-4381-9b69-18074378cbf3","Type":"ContainerDied","Data":"d6453cc156fe8ce41d248960559aabcd01f5f57bed73da8f8701ea242043e061"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.786135 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-57d5b7fdb9-xnrxq" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.796475 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7cd845d9cb-blq7p" event={"ID":"934aadcd-ca9b-42cf-a5a8-4474010a97a7","Type":"ContainerDied","Data":"1e15cef1a88e3eca46060f66b63f368657a1520f8f3e2b59b00a1d4191407743"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.796591 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7cd845d9cb-blq7p" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.834258 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" podStartSLOduration=2.834230866 podStartE2EDuration="2.834230866s" podCreationTimestamp="2026-03-20 08:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:38.805432124 +0000 UTC m=+7631.064743285" watchObservedRunningTime="2026-03-20 08:56:38.834230866 +0000 UTC m=+7631.093542017" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.856442 5136 generic.go:334] "Generic (PLEG): container finished" podID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerID="99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b" exitCode=1 Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.857538 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86cfd8fb5c-2kxss" event={"ID":"51d025cc-fa04-4871-a937-d0967d7aecf8","Type":"ContainerDied","Data":"99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b"} Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.858032 5136 scope.go:117] "RemoveContainer" containerID="99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b" Mar 20 08:56:38 crc kubenswrapper[5136]: E0320 08:56:38.858727 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-86cfd8fb5c-2kxss_openstack(51d025cc-fa04-4871-a937-d0967d7aecf8)\"" pod="openstack/heat-api-86cfd8fb5c-2kxss" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.887787 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7dbf74ffb7-gw5nj" 
podStartSLOduration=2.887765583 podStartE2EDuration="2.887765583s" podCreationTimestamp="2026-03-20 08:56:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:56:38.8369689 +0000 UTC m=+7631.096280051" watchObservedRunningTime="2026-03-20 08:56:38.887765583 +0000 UTC m=+7631.147076734" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.909587 5136 scope.go:117] "RemoveContainer" containerID="551cf6455a7f1a2c0342803fffb1926a0492784af45f7717333269d72d5d5b43" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.935666 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"] Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.945955 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-57d5b7fdb9-xnrxq"] Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.946244 5136 scope.go:117] "RemoveContainer" containerID="8948947ccc38fddba8f1e4bf60ba5283b5e0ca552cec8c2bcd0f1ef15f82e469" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.966981 5136 scope.go:117] "RemoveContainer" containerID="422aee8174cf59684d77d072b55a1c197dd730359392605486620f801d1fb15a" Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.973458 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7cd845d9cb-blq7p"] Mar 20 08:56:38 crc kubenswrapper[5136]: I0320 08:56:38.983085 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7cd845d9cb-blq7p"] Mar 20 08:56:39 crc kubenswrapper[5136]: I0320 08:56:39.870614 5136 scope.go:117] "RemoveContainer" containerID="99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b" Mar 20 08:56:39 crc kubenswrapper[5136]: E0320 08:56:39.871229 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api 
pod=heat-api-86cfd8fb5c-2kxss_openstack(51d025cc-fa04-4871-a937-d0967d7aecf8)\"" pod="openstack/heat-api-86cfd8fb5c-2kxss" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" Mar 20 08:56:39 crc kubenswrapper[5136]: I0320 08:56:39.874451 5136 scope.go:117] "RemoveContainer" containerID="b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb" Mar 20 08:56:39 crc kubenswrapper[5136]: E0320 08:56:39.874855 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-697448b746-tf7pw_openstack(525e3cbe-0215-4bd0-b835-95c6d6001b9a)\"" pod="openstack/heat-cfnapi-697448b746-tf7pw" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" Mar 20 08:56:39 crc kubenswrapper[5136]: I0320 08:56:39.878911 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:39 crc kubenswrapper[5136]: I0320 08:56:39.979020 5136 scope.go:117] "RemoveContainer" containerID="482a97e7c7d9733c356a59b74d29e1b51c08c0378829f0707d6918c34c51d893" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.408370 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" path="/var/lib/kubelet/pods/934aadcd-ca9b-42cf-a5a8-4474010a97a7/volumes" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.408944 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa3f75f3-0821-4381-9b69-18074378cbf3" path="/var/lib/kubelet/pods/fa3f75f3-0821-4381-9b69-18074378cbf3/volumes" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.620431 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.620753 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 
08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.641132 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.641169 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.881354 5136 scope.go:117] "RemoveContainer" containerID="b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb" Mar 20 08:56:40 crc kubenswrapper[5136]: I0320 08:56:40.881505 5136 scope.go:117] "RemoveContainer" containerID="99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b" Mar 20 08:56:40 crc kubenswrapper[5136]: E0320 08:56:40.881709 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-697448b746-tf7pw_openstack(525e3cbe-0215-4bd0-b835-95c6d6001b9a)\"" pod="openstack/heat-cfnapi-697448b746-tf7pw" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" Mar 20 08:56:40 crc kubenswrapper[5136]: E0320 08:56:40.881717 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-86cfd8fb5c-2kxss_openstack(51d025cc-fa04-4871-a937-d0967d7aecf8)\"" pod="openstack/heat-api-86cfd8fb5c-2kxss" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" Mar 20 08:56:41 crc kubenswrapper[5136]: I0320 08:56:41.397046 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:56:41 crc kubenswrapper[5136]: E0320 08:56:41.397393 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:56:41 crc kubenswrapper[5136]: I0320 08:56:41.588308 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 08:56:41 crc kubenswrapper[5136]: I0320 08:56:41.652074 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:56:41 crc kubenswrapper[5136]: I0320 08:56:41.652306 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon-log" containerID="cri-o://2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733" gracePeriod=30 Mar 20 08:56:41 crc kubenswrapper[5136]: I0320 08:56:41.652950 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" containerID="cri-o://60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0" gracePeriod=30 Mar 20 08:56:45 crc kubenswrapper[5136]: I0320 08:56:45.929869 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerDied","Data":"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0"} Mar 20 08:56:45 crc kubenswrapper[5136]: I0320 08:56:45.929890 5136 generic.go:334] "Generic (PLEG): container finished" podID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerID="60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0" exitCode=0 Mar 20 08:56:48 crc kubenswrapper[5136]: I0320 08:56:48.155166 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 08:56:48 crc kubenswrapper[5136]: I0320 08:56:48.189341 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 08:56:48 crc kubenswrapper[5136]: I0320 08:56:48.241437 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:48 crc kubenswrapper[5136]: I0320 08:56:48.281193 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:48 crc kubenswrapper[5136]: I0320 08:56:48.971217 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.377226 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.395014 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.522666 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle\") pod \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.522725 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom\") pod \"51d025cc-fa04-4871-a937-d0967d7aecf8\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.522749 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom\") pod \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.522805 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data\") pod \"51d025cc-fa04-4871-a937-d0967d7aecf8\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.522892 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data\") pod \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.523077 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-wqgkb\" (UniqueName: \"kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb\") pod \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\" (UID: \"525e3cbe-0215-4bd0-b835-95c6d6001b9a\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.523180 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle\") pod \"51d025cc-fa04-4871-a937-d0967d7aecf8\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.523226 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h45cv\" (UniqueName: \"kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv\") pod \"51d025cc-fa04-4871-a937-d0967d7aecf8\" (UID: \"51d025cc-fa04-4871-a937-d0967d7aecf8\") " Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.529412 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "525e3cbe-0215-4bd0-b835-95c6d6001b9a" (UID: "525e3cbe-0215-4bd0-b835-95c6d6001b9a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.530001 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb" (OuterVolumeSpecName: "kube-api-access-wqgkb") pod "525e3cbe-0215-4bd0-b835-95c6d6001b9a" (UID: "525e3cbe-0215-4bd0-b835-95c6d6001b9a"). InnerVolumeSpecName "kube-api-access-wqgkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.531942 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "51d025cc-fa04-4871-a937-d0967d7aecf8" (UID: "51d025cc-fa04-4871-a937-d0967d7aecf8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.532867 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv" (OuterVolumeSpecName: "kube-api-access-h45cv") pod "51d025cc-fa04-4871-a937-d0967d7aecf8" (UID: "51d025cc-fa04-4871-a937-d0967d7aecf8"). InnerVolumeSpecName "kube-api-access-h45cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.556251 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "51d025cc-fa04-4871-a937-d0967d7aecf8" (UID: "51d025cc-fa04-4871-a937-d0967d7aecf8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.564227 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "525e3cbe-0215-4bd0-b835-95c6d6001b9a" (UID: "525e3cbe-0215-4bd0-b835-95c6d6001b9a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.584545 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data" (OuterVolumeSpecName: "config-data") pod "525e3cbe-0215-4bd0-b835-95c6d6001b9a" (UID: "525e3cbe-0215-4bd0-b835-95c6d6001b9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.595626 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data" (OuterVolumeSpecName: "config-data") pod "51d025cc-fa04-4871-a937-d0967d7aecf8" (UID: "51d025cc-fa04-4871-a937-d0967d7aecf8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625285 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqgkb\" (UniqueName: \"kubernetes.io/projected/525e3cbe-0215-4bd0-b835-95c6d6001b9a-kube-api-access-wqgkb\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625363 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625378 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h45cv\" (UniqueName: \"kubernetes.io/projected/51d025cc-fa04-4871-a937-d0967d7aecf8-kube-api-access-h45cv\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625389 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-combined-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625429 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625442 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625453 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/51d025cc-fa04-4871-a937-d0967d7aecf8-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.625468 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/525e3cbe-0215-4bd0-b835-95c6d6001b9a-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.984665 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-697448b746-tf7pw" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.984659 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-697448b746-tf7pw" event={"ID":"525e3cbe-0215-4bd0-b835-95c6d6001b9a","Type":"ContainerDied","Data":"0117f8454b4b92087fd6d028cc295e56c9566ccd0cea7c0fc8c9dfc4ba503aa1"} Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.985086 5136 scope.go:117] "RemoveContainer" containerID="b59ae2f9ff40099b2be5fe13eb938b733e97fba443e458de99f2f275ddc990eb" Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.986356 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-86cfd8fb5c-2kxss" event={"ID":"51d025cc-fa04-4871-a937-d0967d7aecf8","Type":"ContainerDied","Data":"39980fbfc379985ed683289951f5c79560baffaa957735d8c0f526ef670d64d1"} Mar 20 08:56:49 crc kubenswrapper[5136]: I0320 08:56:49.986393 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-86cfd8fb5c-2kxss" Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.020345 5136 scope.go:117] "RemoveContainer" containerID="99b39483f0a530bffc60cde69301f0d30d01610550aa2bb87546c31a1ae6964b" Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.021520 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.038186 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-86cfd8fb5c-2kxss"] Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.047170 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.057477 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-697448b746-tf7pw"] Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.407602 5136 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" path="/var/lib/kubelet/pods/51d025cc-fa04-4871-a937-d0967d7aecf8/volumes" Mar 20 08:56:50 crc kubenswrapper[5136]: I0320 08:56:50.408129 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" path="/var/lib/kubelet/pods/525e3cbe-0215-4bd0-b835-95c6d6001b9a/volumes" Mar 20 08:56:51 crc kubenswrapper[5136]: I0320 08:56:51.416908 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.152:8443: connect: connection refused" Mar 20 08:56:52 crc kubenswrapper[5136]: I0320 08:56:52.397412 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:56:52 crc kubenswrapper[5136]: E0320 08:56:52.397743 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:56:55 crc kubenswrapper[5136]: I0320 08:56:55.634846 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-7659754fcd-klwkv" Mar 20 08:56:55 crc kubenswrapper[5136]: I0320 08:56:55.690226 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"] Mar 20 08:56:55 crc kubenswrapper[5136]: I0320 08:56:55.690446 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5644df8c69-t5dqn" 
podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerName="heat-engine" containerID="cri-o://8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" gracePeriod=60 Mar 20 08:56:58 crc kubenswrapper[5136]: E0320 08:56:58.615493 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 08:56:58 crc kubenswrapper[5136]: E0320 08:56:58.641364 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 08:56:58 crc kubenswrapper[5136]: E0320 08:56:58.643614 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 08:56:58 crc kubenswrapper[5136]: E0320 08:56:58.643652 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5644df8c69-t5dqn" podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerName="heat-engine" Mar 20 08:57:01 crc kubenswrapper[5136]: I0320 08:57:01.416979 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" probeResult="failure" output="Get 
\"https://10.217.1.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.152:8443: connect: connection refused" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.055075 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-gqtht"] Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.064656 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-gqtht"] Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.080109 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-24c6-account-create-update-625nw"] Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.093684 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-24c6-account-create-update-625nw"] Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.096106 5136 generic.go:334] "Generic (PLEG): container finished" podID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerID="8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" exitCode=0 Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.096141 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5644df8c69-t5dqn" event={"ID":"cdfda925-e99e-45fc-9fe8-c91b77e3179e","Type":"ContainerDied","Data":"8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c"} Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.322425 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.411241 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06c8bf45-d717-45f4-9679-7f6b69835f8a" path="/var/lib/kubelet/pods/06c8bf45-d717-45f4-9679-7f6b69835f8a/volumes" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.413412 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4aab638-4f7d-46a0-bc82-10fe569b56db" path="/var/lib/kubelet/pods/a4aab638-4f7d-46a0-bc82-10fe569b56db/volumes" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.468903 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle\") pod \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.469137 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c6lz\" (UniqueName: \"kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz\") pod \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.469520 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data\") pod \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") " Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.469606 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom\") pod \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\" (UID: \"cdfda925-e99e-45fc-9fe8-c91b77e3179e\") 
" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.494210 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz" (OuterVolumeSpecName: "kube-api-access-7c6lz") pod "cdfda925-e99e-45fc-9fe8-c91b77e3179e" (UID: "cdfda925-e99e-45fc-9fe8-c91b77e3179e"). InnerVolumeSpecName "kube-api-access-7c6lz". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.505162 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "cdfda925-e99e-45fc-9fe8-c91b77e3179e" (UID: "cdfda925-e99e-45fc-9fe8-c91b77e3179e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.528615 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cdfda925-e99e-45fc-9fe8-c91b77e3179e" (UID: "cdfda925-e99e-45fc-9fe8-c91b77e3179e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.535769 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data" (OuterVolumeSpecName: "config-data") pod "cdfda925-e99e-45fc-9fe8-c91b77e3179e" (UID: "cdfda925-e99e-45fc-9fe8-c91b77e3179e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.572861 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c6lz\" (UniqueName: \"kubernetes.io/projected/cdfda925-e99e-45fc-9fe8-c91b77e3179e-kube-api-access-7c6lz\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.572901 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.572914 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:02 crc kubenswrapper[5136]: I0320 08:57:02.572926 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cdfda925-e99e-45fc-9fe8-c91b77e3179e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:03 crc kubenswrapper[5136]: I0320 08:57:03.106074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5644df8c69-t5dqn" event={"ID":"cdfda925-e99e-45fc-9fe8-c91b77e3179e","Type":"ContainerDied","Data":"fb162b157e354506481d1c7139390a7d3e392195416ceb14e79870cc4731ee73"} Mar 20 08:57:03 crc kubenswrapper[5136]: I0320 08:57:03.106120 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5644df8c69-t5dqn" Mar 20 08:57:03 crc kubenswrapper[5136]: I0320 08:57:03.106415 5136 scope.go:117] "RemoveContainer" containerID="8234d598beb1dd46142938d95bb8bea8455b6596bad4b9020d813cc16877f24c" Mar 20 08:57:03 crc kubenswrapper[5136]: I0320 08:57:03.141117 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"] Mar 20 08:57:03 crc kubenswrapper[5136]: I0320 08:57:03.151038 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5644df8c69-t5dqn"] Mar 20 08:57:04 crc kubenswrapper[5136]: I0320 08:57:04.398647 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:57:04 crc kubenswrapper[5136]: E0320 08:57:04.398957 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:57:04 crc kubenswrapper[5136]: I0320 08:57:04.410845 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" path="/var/lib/kubelet/pods/cdfda925-e99e-45fc-9fe8-c91b77e3179e/volumes" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.028368 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029050 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerName="heat-engine" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029062 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerName="heat-engine" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029075 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa3f75f3-0821-4381-9b69-18074378cbf3" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029082 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa3f75f3-0821-4381-9b69-18074378cbf3" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029097 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029103 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029117 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029124 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029137 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029172 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029188 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029194 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" 
containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029511 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029521 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdfda925-e99e-45fc-9fe8-c91b77e3179e" containerName="heat-engine" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029536 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="934aadcd-ca9b-42cf-a5a8-4474010a97a7" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029547 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="51d025cc-fa04-4871-a937-d0967d7aecf8" containerName="heat-api" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029560 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029568 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa3f75f3-0821-4381-9b69-18074378cbf3" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029579 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: E0320 08:57:07.029740 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.029748 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="525e3cbe-0215-4bd0-b835-95c6d6001b9a" containerName="heat-cfnapi" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.030782 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.044489 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.071708 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.071784 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7wsk\" (UniqueName: \"kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.071911 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.174040 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.174114 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-c7wsk\" (UniqueName: \"kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.174191 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.174590 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.174600 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.197053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7wsk\" (UniqueName: \"kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk\") pod \"redhat-marketplace-4dn6b\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.380311 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:07 crc kubenswrapper[5136]: I0320 08:57:07.898588 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:08 crc kubenswrapper[5136]: I0320 08:57:08.156268 5136 generic.go:334] "Generic (PLEG): container finished" podID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerID="f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77" exitCode=0 Mar 20 08:57:08 crc kubenswrapper[5136]: I0320 08:57:08.156361 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerDied","Data":"f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77"} Mar 20 08:57:08 crc kubenswrapper[5136]: I0320 08:57:08.156537 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerStarted","Data":"34fda41cca48459bc52e23aa7dfd2ccdf3b10e1014efbeb2b077c36782fe119e"} Mar 20 08:57:10 crc kubenswrapper[5136]: I0320 08:57:10.181358 5136 generic.go:334] "Generic (PLEG): container finished" podID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerID="ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472" exitCode=0 Mar 20 08:57:10 crc kubenswrapper[5136]: I0320 08:57:10.181575 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerDied","Data":"ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472"} Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.055606 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-grfwk"] Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.068163 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/neutron-db-sync-grfwk"] Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.195251 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerStarted","Data":"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c"} Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.221161 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4dn6b" podStartSLOduration=2.778902193 podStartE2EDuration="5.221121345s" podCreationTimestamp="2026-03-20 08:57:06 +0000 UTC" firstStartedPulling="2026-03-20 08:57:08.158743073 +0000 UTC m=+7660.418054224" lastFinishedPulling="2026-03-20 08:57:10.600962195 +0000 UTC m=+7662.860273376" observedRunningTime="2026-03-20 08:57:11.209530456 +0000 UTC m=+7663.468841627" watchObservedRunningTime="2026-03-20 08:57:11.221121345 +0000 UTC m=+7663.480432496" Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.416096 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-96f64bfb8-g7cfv" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.152:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.152:8443: connect: connection refused" Mar 20 08:57:11 crc kubenswrapper[5136]: I0320 08:57:11.416205 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.060963 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178478 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj6q7\" (UniqueName: \"kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178569 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178597 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178626 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178874 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178913 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178937 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key\") pod \"07e0c938-d0f6-43dc-8864-68149aedc96c\" (UID: \"07e0c938-d0f6-43dc-8864-68149aedc96c\") " Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.178980 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs" (OuterVolumeSpecName: "logs") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.179367 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/07e0c938-d0f6-43dc-8864-68149aedc96c-logs\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.185867 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.185943 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7" (OuterVolumeSpecName: "kube-api-access-pj6q7") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "kube-api-access-pj6q7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.226632 5136 generic.go:334] "Generic (PLEG): container finished" podID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerID="2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733" exitCode=137 Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.227744 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerDied","Data":"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733"} Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.227767 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-96f64bfb8-g7cfv" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.227791 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-96f64bfb8-g7cfv" event={"ID":"07e0c938-d0f6-43dc-8864-68149aedc96c","Type":"ContainerDied","Data":"b1ee61ebca0b68cd44117cc6cff786c05a69987a3f3427a20fa4db8597ed371f"} Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.227826 5136 scope.go:117] "RemoveContainer" containerID="60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.237405 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts" (OuterVolumeSpecName: "scripts") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.245910 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.256362 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data" (OuterVolumeSpecName: "config-data") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.269524 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "07e0c938-d0f6-43dc-8864-68149aedc96c" (UID: "07e0c938-d0f6-43dc-8864-68149aedc96c"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280912 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280943 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280956 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj6q7\" (UniqueName: \"kubernetes.io/projected/07e0c938-d0f6-43dc-8864-68149aedc96c-kube-api-access-pj6q7\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280966 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280974 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/07e0c938-d0f6-43dc-8864-68149aedc96c-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.280985 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/07e0c938-d0f6-43dc-8864-68149aedc96c-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.408221 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c6db9e6-4059-4911-b008-680848fffdbe" path="/var/lib/kubelet/pods/4c6db9e6-4059-4911-b008-680848fffdbe/volumes" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.526401 5136 scope.go:117] "RemoveContainer" containerID="2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.556528 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.566239 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-96f64bfb8-g7cfv"] Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.597803 5136 scope.go:117] "RemoveContainer" containerID="60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0" Mar 20 08:57:12 crc kubenswrapper[5136]: E0320 08:57:12.598237 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0\": container with ID starting with 60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0 not found: ID does not exist" containerID="60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.598297 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0"} err="failed to get container status \"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0\": rpc error: code = NotFound desc = could not find container \"60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0\": container with 
ID starting with 60f749df76d18a10141d2062a40ea962edc0b52ceb6a36857635111c1d44eaf0 not found: ID does not exist" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.598343 5136 scope.go:117] "RemoveContainer" containerID="2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733" Mar 20 08:57:12 crc kubenswrapper[5136]: E0320 08:57:12.598764 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733\": container with ID starting with 2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733 not found: ID does not exist" containerID="2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733" Mar 20 08:57:12 crc kubenswrapper[5136]: I0320 08:57:12.598926 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733"} err="failed to get container status \"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733\": rpc error: code = NotFound desc = could not find container \"2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733\": container with ID starting with 2ea3b6de68a43a824c78c0e4f561d092ba769891c4d2b0ed7160300307750733 not found: ID does not exist" Mar 20 08:57:14 crc kubenswrapper[5136]: I0320 08:57:14.413049 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" path="/var/lib/kubelet/pods/07e0c938-d0f6-43dc-8864-68149aedc96c/volumes" Mar 20 08:57:15 crc kubenswrapper[5136]: I0320 08:57:15.397009 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:57:15 crc kubenswrapper[5136]: E0320 08:57:15.397671 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:57:17 crc kubenswrapper[5136]: I0320 08:57:17.381905 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:17 crc kubenswrapper[5136]: I0320 08:57:17.382141 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:17 crc kubenswrapper[5136]: I0320 08:57:17.424216 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:18 crc kubenswrapper[5136]: I0320 08:57:18.335838 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.025685 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn"] Mar 20 08:57:19 crc kubenswrapper[5136]: E0320 08:57:19.026139 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon-log" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.026154 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon-log" Mar 20 08:57:19 crc kubenswrapper[5136]: E0320 08:57:19.026178 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.026184 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" 
containerName="horizon" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.026370 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon-log" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.026383 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e0c938-d0f6-43dc-8864-68149aedc96c" containerName="horizon" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.027668 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.030393 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.035180 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn"] Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.116625 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.116694 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 
crc kubenswrapper[5136]: I0320 08:57:19.116733 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lm62\" (UniqueName: \"kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.219889 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.220112 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.220252 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lm62\" (UniqueName: \"kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.220632 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.220648 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.244624 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lm62\" (UniqueName: \"kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62\") pod \"93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") " pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.406936 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" Mar 20 08:57:19 crc kubenswrapper[5136]: I0320 08:57:19.800374 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn"] Mar 20 08:57:20 crc kubenswrapper[5136]: I0320 08:57:20.306585 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerStarted","Data":"5109e573e6cc1cd17b6b8342da492547d9d0b3abd4199b503cf68891b2593fb2"} Mar 20 08:57:20 crc kubenswrapper[5136]: I0320 08:57:20.306624 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerStarted","Data":"fce0d666073b21cca4cea16cc70ca7c1868877700eeea616fb38a6a013d5e1d8"} Mar 20 08:57:21 crc kubenswrapper[5136]: I0320 08:57:21.317053 5136 generic.go:334] "Generic (PLEG): container finished" podID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerID="5109e573e6cc1cd17b6b8342da492547d9d0b3abd4199b503cf68891b2593fb2" exitCode=0 Mar 20 08:57:21 crc kubenswrapper[5136]: I0320 08:57:21.317164 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerDied","Data":"5109e573e6cc1cd17b6b8342da492547d9d0b3abd4199b503cf68891b2593fb2"} Mar 20 08:57:21 crc kubenswrapper[5136]: I0320 08:57:21.588388 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:21 crc kubenswrapper[5136]: I0320 08:57:21.588726 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/redhat-marketplace-4dn6b" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="registry-server" containerID="cri-o://47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c" gracePeriod=2 Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.078644 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.200226 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities\") pod \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.200334 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content\") pod \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.200381 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7wsk\" (UniqueName: \"kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk\") pod \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\" (UID: \"4f642bb1-6f66-4460-922a-a497e5b3dc6a\") " Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.202663 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities" (OuterVolumeSpecName: "utilities") pod "4f642bb1-6f66-4460-922a-a497e5b3dc6a" (UID: "4f642bb1-6f66-4460-922a-a497e5b3dc6a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.209095 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk" (OuterVolumeSpecName: "kube-api-access-c7wsk") pod "4f642bb1-6f66-4460-922a-a497e5b3dc6a" (UID: "4f642bb1-6f66-4460-922a-a497e5b3dc6a"). InnerVolumeSpecName "kube-api-access-c7wsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.237981 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f642bb1-6f66-4460-922a-a497e5b3dc6a" (UID: "4f642bb1-6f66-4460-922a-a497e5b3dc6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.302836 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.302878 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f642bb1-6f66-4460-922a-a497e5b3dc6a-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.302894 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7wsk\" (UniqueName: \"kubernetes.io/projected/4f642bb1-6f66-4460-922a-a497e5b3dc6a-kube-api-access-c7wsk\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.328386 5136 generic.go:334] "Generic (PLEG): container finished" podID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" 
containerID="47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c" exitCode=0 Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.328435 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerDied","Data":"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c"} Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.328455 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4dn6b" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.328473 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4dn6b" event={"ID":"4f642bb1-6f66-4460-922a-a497e5b3dc6a","Type":"ContainerDied","Data":"34fda41cca48459bc52e23aa7dfd2ccdf3b10e1014efbeb2b077c36782fe119e"} Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.328496 5136 scope.go:117] "RemoveContainer" containerID="47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.368486 5136 scope.go:117] "RemoveContainer" containerID="ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.370368 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.385219 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4dn6b"] Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.390272 5136 scope.go:117] "RemoveContainer" containerID="f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.408956 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" 
path="/var/lib/kubelet/pods/4f642bb1-6f66-4460-922a-a497e5b3dc6a/volumes" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.478952 5136 scope.go:117] "RemoveContainer" containerID="47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c" Mar 20 08:57:22 crc kubenswrapper[5136]: E0320 08:57:22.479454 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c\": container with ID starting with 47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c not found: ID does not exist" containerID="47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.479486 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c"} err="failed to get container status \"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c\": rpc error: code = NotFound desc = could not find container \"47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c\": container with ID starting with 47f6c61e8d33c1912ea625612edd728bb902b66e835b0bc397fb7c04af91226c not found: ID does not exist" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.479511 5136 scope.go:117] "RemoveContainer" containerID="ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472" Mar 20 08:57:22 crc kubenswrapper[5136]: E0320 08:57:22.480077 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472\": container with ID starting with ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472 not found: ID does not exist" containerID="ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472" Mar 20 08:57:22 crc kubenswrapper[5136]: 
I0320 08:57:22.480222 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472"} err="failed to get container status \"ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472\": rpc error: code = NotFound desc = could not find container \"ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472\": container with ID starting with ef629ca2fe399a5a378746cb8ae6dd313b7c0d330eb41780a8bc10fb61e11472 not found: ID does not exist" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.480248 5136 scope.go:117] "RemoveContainer" containerID="f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77" Mar 20 08:57:22 crc kubenswrapper[5136]: E0320 08:57:22.481429 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77\": container with ID starting with f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77 not found: ID does not exist" containerID="f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.481458 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77"} err="failed to get container status \"f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77\": rpc error: code = NotFound desc = could not find container \"f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77\": container with ID starting with f247c8e38bbb05f605b94a9227f4af9c64bdf63bed25f5037a63d33c0e0bad77 not found: ID does not exist" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.599416 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"] Mar 20 08:57:22 crc kubenswrapper[5136]: 
E0320 08:57:22.600211 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="extract-content" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.600228 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="extract-content" Mar 20 08:57:22 crc kubenswrapper[5136]: E0320 08:57:22.600241 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="extract-utilities" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.600249 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="extract-utilities" Mar 20 08:57:22 crc kubenswrapper[5136]: E0320 08:57:22.600272 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="registry-server" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.600279 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="registry-server" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.600493 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f642bb1-6f66-4460-922a-a497e5b3dc6a" containerName="registry-server" Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.602123 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.610134 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"]
Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.709882 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.709972 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.710043 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncj89\" (UniqueName: \"kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.811094 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.811203 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.811284 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncj89\" (UniqueName: \"kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.813057 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.813365 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.829983 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncj89\" (UniqueName: \"kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89\") pod \"redhat-operators-w4zdj\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:22 crc kubenswrapper[5136]: I0320 08:57:22.930480 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:23 crc kubenswrapper[5136]: I0320 08:57:23.340382 5136 generic.go:334] "Generic (PLEG): container finished" podID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerID="d88e92b7a96a38a0d91605318624250fcbed4ac0cc4ccaad1aaad0ef4497cab2" exitCode=0
Mar 20 08:57:23 crc kubenswrapper[5136]: I0320 08:57:23.340515 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerDied","Data":"d88e92b7a96a38a0d91605318624250fcbed4ac0cc4ccaad1aaad0ef4497cab2"}
Mar 20 08:57:23 crc kubenswrapper[5136]: I0320 08:57:23.911602 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"]
Mar 20 08:57:23 crc kubenswrapper[5136]: W0320 08:57:23.917339 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbbd891d_c8eb_404c_8255_2a3bba4035ee.slice/crio-b8ea34d57154026d7acfa78afb0581c846280208e141611a13e5cd82670eae89 WatchSource:0}: Error finding container b8ea34d57154026d7acfa78afb0581c846280208e141611a13e5cd82670eae89: Status 404 returned error can't find the container with id b8ea34d57154026d7acfa78afb0581c846280208e141611a13e5cd82670eae89
Mar 20 08:57:24 crc kubenswrapper[5136]: I0320 08:57:24.350544 5136 generic.go:334] "Generic (PLEG): container finished" podID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerID="044b3424c8e96991d8d6c7d14a0f5e375547bb5b4b46583de32c8b8c28a5ea26" exitCode=0
Mar 20 08:57:24 crc kubenswrapper[5136]: I0320 08:57:24.350712 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerDied","Data":"044b3424c8e96991d8d6c7d14a0f5e375547bb5b4b46583de32c8b8c28a5ea26"}
Mar 20 08:57:24 crc kubenswrapper[5136]: I0320 08:57:24.352720 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerID="3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd" exitCode=0
Mar 20 08:57:24 crc kubenswrapper[5136]: I0320 08:57:24.352776 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerDied","Data":"3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd"}
Mar 20 08:57:24 crc kubenswrapper[5136]: I0320 08:57:24.352800 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerStarted","Data":"b8ea34d57154026d7acfa78afb0581c846280208e141611a13e5cd82670eae89"}
Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.714988 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn"
Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.775229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lm62\" (UniqueName: \"kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62\") pod \"380bd027-6e4d-49b8-af6b-db5cd8b06635\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") "
Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.775366 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle\") pod \"380bd027-6e4d-49b8-af6b-db5cd8b06635\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") "
Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.775441 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util\") pod \"380bd027-6e4d-49b8-af6b-db5cd8b06635\" (UID: \"380bd027-6e4d-49b8-af6b-db5cd8b06635\") "
Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.778294 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle" (OuterVolumeSpecName: "bundle") pod "380bd027-6e4d-49b8-af6b-db5cd8b06635" (UID: "380bd027-6e4d-49b8-af6b-db5cd8b06635"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.782651 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62" (OuterVolumeSpecName: "kube-api-access-2lm62") pod "380bd027-6e4d-49b8-af6b-db5cd8b06635" (UID: "380bd027-6e4d-49b8-af6b-db5cd8b06635"). InnerVolumeSpecName "kube-api-access-2lm62". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.786608 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util" (OuterVolumeSpecName: "util") pod "380bd027-6e4d-49b8-af6b-db5cd8b06635" (UID: "380bd027-6e4d-49b8-af6b-db5cd8b06635"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.878427 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lm62\" (UniqueName: \"kubernetes.io/projected/380bd027-6e4d-49b8-af6b-db5cd8b06635-kube-api-access-2lm62\") on node \"crc\" DevicePath \"\""
Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.878481 5136 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 08:57:25 crc kubenswrapper[5136]: I0320 08:57:25.878503 5136 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/380bd027-6e4d-49b8-af6b-db5cd8b06635-util\") on node \"crc\" DevicePath \"\""
Mar 20 08:57:26 crc kubenswrapper[5136]: I0320 08:57:26.375109 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn" event={"ID":"380bd027-6e4d-49b8-af6b-db5cd8b06635","Type":"ContainerDied","Data":"fce0d666073b21cca4cea16cc70ca7c1868877700eeea616fb38a6a013d5e1d8"}
Mar 20 08:57:26 crc kubenswrapper[5136]: I0320 08:57:26.375152 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fce0d666073b21cca4cea16cc70ca7c1868877700eeea616fb38a6a013d5e1d8"
Mar 20 08:57:26 crc kubenswrapper[5136]: I0320 08:57:26.375215 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn"
Mar 20 08:57:26 crc kubenswrapper[5136]: I0320 08:57:26.377043 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerStarted","Data":"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515"}
Mar 20 08:57:26 crc kubenswrapper[5136]: I0320 08:57:26.396992 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"
Mar 20 08:57:26 crc kubenswrapper[5136]: E0320 08:57:26.397363 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:57:28 crc kubenswrapper[5136]: I0320 08:57:28.403242 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerID="9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515" exitCode=0
Mar 20 08:57:28 crc kubenswrapper[5136]: I0320 08:57:28.417636 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerDied","Data":"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515"}
Mar 20 08:57:29 crc kubenswrapper[5136]: I0320 08:57:29.415868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerStarted","Data":"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1"}
Mar 20 08:57:29 crc kubenswrapper[5136]: I0320 08:57:29.442562 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w4zdj" podStartSLOduration=2.872326812 podStartE2EDuration="7.442544897s" podCreationTimestamp="2026-03-20 08:57:22 +0000 UTC" firstStartedPulling="2026-03-20 08:57:24.354804069 +0000 UTC m=+7676.614115210" lastFinishedPulling="2026-03-20 08:57:28.925022144 +0000 UTC m=+7681.184333295" observedRunningTime="2026-03-20 08:57:29.434058405 +0000 UTC m=+7681.693369556" watchObservedRunningTime="2026-03-20 08:57:29.442544897 +0000 UTC m=+7681.701856048"
Mar 20 08:57:32 crc kubenswrapper[5136]: I0320 08:57:32.931293 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:32 crc kubenswrapper[5136]: I0320 08:57:32.931805 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w4zdj"
Mar 20 08:57:33 crc kubenswrapper[5136]: I0320 08:57:33.977291 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" probeResult="failure" output=<
Mar 20 08:57:33 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s
Mar 20 08:57:33 crc kubenswrapper[5136]: >
Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.950751 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx"]
Mar 20 08:57:37 crc kubenswrapper[5136]: E0320 08:57:37.951526 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="util"
Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.951538 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="util"
Mar 20 08:57:37 crc kubenswrapper[5136]: E0320 08:57:37.951574 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="extract"
Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.951581 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="extract"
Mar 20 08:57:37 crc kubenswrapper[5136]: E0320 08:57:37.951589 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="pull"
Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.951595 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="pull"
Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.951757 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="380bd027-6e4d-49b8-af6b-db5cd8b06635" containerName="extract"
Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.952365 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx"
Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.962735 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.963257 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-m76t5"
Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.963390 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Mar 20 08:57:37 crc kubenswrapper[5136]: I0320 08:57:37.975129 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx"]
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.147498 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzj2k\" (UniqueName: \"kubernetes.io/projected/b1998fd9-5100-4819-83d9-61c453df2121-kube-api-access-vzj2k\") pod \"obo-prometheus-operator-8ff7d675-pw7kx\" (UID: \"b1998fd9-5100-4819-83d9-61c453df2121\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.249746 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzj2k\" (UniqueName: \"kubernetes.io/projected/b1998fd9-5100-4819-83d9-61c453df2121-kube-api-access-vzj2k\") pod \"obo-prometheus-operator-8ff7d675-pw7kx\" (UID: \"b1998fd9-5100-4819-83d9-61c453df2121\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.280619 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzj2k\" (UniqueName: \"kubernetes.io/projected/b1998fd9-5100-4819-83d9-61c453df2121-kube-api-access-vzj2k\") pod \"obo-prometheus-operator-8ff7d675-pw7kx\" (UID: \"b1998fd9-5100-4819-83d9-61c453df2121\") " pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.346917 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"]
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.349035 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.352707 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.353438 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-qcbcv"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.373115 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"]
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.387103 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.401872 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"]
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.460632 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"]
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.468208 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.468807 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.571030 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.571283 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.571356 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.571561 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.578275 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.578352 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.578398 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6e3b7b66-720f-451e-b76c-d14672876450-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k\" (UID: \"6e3b7b66-720f-451e-b76c-d14672876450\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.677870 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.678215 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.689711 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.710226 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.711474 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e648d436-8985-4d18-83b2-8401e5e3b301-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg\" (UID: \"e648d436-8985-4d18-83b2-8401e5e3b301\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"
Mar 20 08:57:38 crc kubenswrapper[5136]: I0320 08:57:38.731457 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.005616 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-7pqgh"]
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.006946 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.019241 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-nktsr"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.019412 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.034671 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-7pqgh"]
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.088608 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt6gf\" (UniqueName: \"kubernetes.io/projected/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-kube-api-access-dt6gf\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.088678 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.190892 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt6gf\" (UniqueName: \"kubernetes.io/projected/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-kube-api-access-dt6gf\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.190967 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.199790 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-observability-operator-tls\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.208146 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt6gf\" (UniqueName: \"kubernetes.io/projected/cbf95789-daee-44bb-9d6a-a5b503c0b1e1-kube-api-access-dt6gf\") pod \"observability-operator-6dd7dd855f-7pqgh\" (UID: \"cbf95789-daee-44bb-9d6a-a5b503c0b1e1\") " pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.338430 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.366898 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k"]
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.369980 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg"]
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.397334 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"
Mar 20 08:57:39 crc kubenswrapper[5136]: E0320 08:57:39.397594 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.541010 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" event={"ID":"6e3b7b66-720f-451e-b76c-d14672876450","Type":"ContainerStarted","Data":"cea3ed687a99dec171352d1ab2b937889fe1bed3a13c96dde59c4ecdbf42ad9e"}
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.543675 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" event={"ID":"e648d436-8985-4d18-83b2-8401e5e3b301","Type":"ContainerStarted","Data":"a33dccb87d830c68fee4d3ea84fa0760f6c1fc50057aa94fe495b825fcf8647b"}
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.621285 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx"]
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.725327 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-7979496b84-bg2n6"]
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.726784 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.734576 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-4jp76"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.734778 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-service-cert"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.750868 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-7979496b84-bg2n6"]
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.809040 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e3c2d08-6905-419d-a0d6-f4935119b632-openshift-service-ca\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.809189 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-webhook-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.809299 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-apiservice-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.809371 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crm5b\" (UniqueName: \"kubernetes.io/projected/0e3c2d08-6905-419d-a0d6-f4935119b632-kube-api-access-crm5b\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.913272 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-webhook-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.913414 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-apiservice-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.913481 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crm5b\" (UniqueName: \"kubernetes.io/projected/0e3c2d08-6905-419d-a0d6-f4935119b632-kube-api-access-crm5b\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.913522 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e3c2d08-6905-419d-a0d6-f4935119b632-openshift-service-ca\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.914617 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/0e3c2d08-6905-419d-a0d6-f4935119b632-openshift-service-ca\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.919541 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-webhook-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.922451 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0e3c2d08-6905-419d-a0d6-f4935119b632-apiservice-cert\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:39 crc kubenswrapper[5136]: I0320 08:57:39.942494 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crm5b\" (UniqueName: \"kubernetes.io/projected/0e3c2d08-6905-419d-a0d6-f4935119b632-kube-api-access-crm5b\") pod \"perses-operator-7979496b84-bg2n6\" (UID: \"0e3c2d08-6905-419d-a0d6-f4935119b632\") " pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.007682 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-6dd7dd855f-7pqgh"]
Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.049778 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-7979496b84-bg2n6"
Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.102085 5136 scope.go:117] "RemoveContainer" containerID="4b1f554c7f496a2460aeebf430477f38851975d2eafc2fb7735f082f6ef9d928"
Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.134597 5136 scope.go:117] "RemoveContainer" containerID="340051113d29f7efbc695a906b0061f19d9714ea49c2643f84b952194ea4de4c"
Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.178565 5136 scope.go:117] "RemoveContainer" containerID="87d4064c210f2c8ecf2546f67dd8fe9ef436d4f291209d0fa6a7f5ba97b6e5e4"
Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.566948 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx" event={"ID":"b1998fd9-5100-4819-83d9-61c453df2121","Type":"ContainerStarted","Data":"eacfb59452e84ab3eb8207170515d7e79a24614d7f794cbdecbcda76f8c132bc"}
Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.569184 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" event={"ID":"cbf95789-daee-44bb-9d6a-a5b503c0b1e1","Type":"ContainerStarted","Data":"dc8dea89f94e20db496cc3471f721a6dd4cc4151eab5e1b584cdcc8555c0394d"}
Mar 20 08:57:40 crc kubenswrapper[5136]: I0320 08:57:40.581165 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-7979496b84-bg2n6"]
Mar 20 08:57:40 crc kubenswrapper[5136]: W0320 08:57:40.587038 5136 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e3c2d08_6905_419d_a0d6_f4935119b632.slice/crio-1e050de244828ed81a807be1e33f55a100599a996fc71d325d105a7b4b35d594 WatchSource:0}: Error finding container 1e050de244828ed81a807be1e33f55a100599a996fc71d325d105a7b4b35d594: Status 404 returned error can't find the container with id 1e050de244828ed81a807be1e33f55a100599a996fc71d325d105a7b4b35d594 Mar 20 08:57:41 crc kubenswrapper[5136]: I0320 08:57:41.589756 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-7979496b84-bg2n6" event={"ID":"0e3c2d08-6905-419d-a0d6-f4935119b632","Type":"ContainerStarted","Data":"1e050de244828ed81a807be1e33f55a100599a996fc71d325d105a7b4b35d594"} Mar 20 08:57:42 crc kubenswrapper[5136]: I0320 08:57:42.601863 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" event={"ID":"6e3b7b66-720f-451e-b76c-d14672876450","Type":"ContainerStarted","Data":"e7855e40b73948ba4b2fb8bca623cf38e57459a85f6f13ef40af1dfa854e6ffb"} Mar 20 08:57:42 crc kubenswrapper[5136]: I0320 08:57:42.630678 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k" podStartSLOduration=1.873369108 podStartE2EDuration="4.630656355s" podCreationTimestamp="2026-03-20 08:57:38 +0000 UTC" firstStartedPulling="2026-03-20 08:57:39.383219373 +0000 UTC m=+7691.642530524" lastFinishedPulling="2026-03-20 08:57:42.14050661 +0000 UTC m=+7694.399817771" observedRunningTime="2026-03-20 08:57:42.621129279 +0000 UTC m=+7694.880440430" watchObservedRunningTime="2026-03-20 08:57:42.630656355 +0000 UTC m=+7694.889967506" Mar 20 08:57:43 crc kubenswrapper[5136]: I0320 08:57:43.615741 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" 
event={"ID":"e648d436-8985-4d18-83b2-8401e5e3b301","Type":"ContainerStarted","Data":"e1460726678122174e54c7789772e2cae030b4d250d808aa887561a2f704d46e"} Mar 20 08:57:43 crc kubenswrapper[5136]: I0320 08:57:43.641252 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg" podStartSLOduration=2.880248452 podStartE2EDuration="5.641232523s" podCreationTimestamp="2026-03-20 08:57:38 +0000 UTC" firstStartedPulling="2026-03-20 08:57:39.38539235 +0000 UTC m=+7691.644703501" lastFinishedPulling="2026-03-20 08:57:42.146376411 +0000 UTC m=+7694.405687572" observedRunningTime="2026-03-20 08:57:43.633142442 +0000 UTC m=+7695.892453593" watchObservedRunningTime="2026-03-20 08:57:43.641232523 +0000 UTC m=+7695.900543674" Mar 20 08:57:43 crc kubenswrapper[5136]: I0320 08:57:43.989519 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" probeResult="failure" output=< Mar 20 08:57:43 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 08:57:43 crc kubenswrapper[5136]: > Mar 20 08:57:46 crc kubenswrapper[5136]: I0320 08:57:46.655502 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx" event={"ID":"b1998fd9-5100-4819-83d9-61c453df2121","Type":"ContainerStarted","Data":"d13ac4f372725086a73d27ac54f5430b9e6d28970d2833f5045e2b3be8f55e00"} Mar 20 08:57:46 crc kubenswrapper[5136]: I0320 08:57:46.657384 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-7979496b84-bg2n6" event={"ID":"0e3c2d08-6905-419d-a0d6-f4935119b632","Type":"ContainerStarted","Data":"8e071dd5716df73fb0d84e99d16769e4f32df706d116acd68d97f41d0f25e202"} Mar 20 08:57:46 crc kubenswrapper[5136]: I0320 08:57:46.657516 5136 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:46 crc kubenswrapper[5136]: I0320 08:57:46.678186 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-8ff7d675-pw7kx" podStartSLOduration=3.975694777 podStartE2EDuration="9.678167537s" podCreationTimestamp="2026-03-20 08:57:37 +0000 UTC" firstStartedPulling="2026-03-20 08:57:39.621612564 +0000 UTC m=+7691.880923715" lastFinishedPulling="2026-03-20 08:57:45.324085324 +0000 UTC m=+7697.583396475" observedRunningTime="2026-03-20 08:57:46.67730154 +0000 UTC m=+7698.936612691" watchObservedRunningTime="2026-03-20 08:57:46.678167537 +0000 UTC m=+7698.937478688" Mar 20 08:57:46 crc kubenswrapper[5136]: I0320 08:57:46.702679 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-7979496b84-bg2n6" podStartSLOduration=2.969804684 podStartE2EDuration="7.702656995s" podCreationTimestamp="2026-03-20 08:57:39 +0000 UTC" firstStartedPulling="2026-03-20 08:57:40.588702385 +0000 UTC m=+7692.848013536" lastFinishedPulling="2026-03-20 08:57:45.321554696 +0000 UTC m=+7697.580865847" observedRunningTime="2026-03-20 08:57:46.696173795 +0000 UTC m=+7698.955484936" watchObservedRunningTime="2026-03-20 08:57:46.702656995 +0000 UTC m=+7698.961968146" Mar 20 08:57:50 crc kubenswrapper[5136]: I0320 08:57:50.051954 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-7979496b84-bg2n6" Mar 20 08:57:50 crc kubenswrapper[5136]: I0320 08:57:50.703004 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" event={"ID":"cbf95789-daee-44bb-9d6a-a5b503c0b1e1","Type":"ContainerStarted","Data":"d6d4627b7cc4555652fe786bd2e3248b1091ec276f5b7264a298618c4e4db0c8"} Mar 20 08:57:50 crc kubenswrapper[5136]: I0320 08:57:50.703550 5136 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:50 crc kubenswrapper[5136]: I0320 08:57:50.705156 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" Mar 20 08:57:50 crc kubenswrapper[5136]: I0320 08:57:50.723009 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-6dd7dd855f-7pqgh" podStartSLOduration=3.037612434 podStartE2EDuration="12.722960305s" podCreationTimestamp="2026-03-20 08:57:38 +0000 UTC" firstStartedPulling="2026-03-20 08:57:40.049575734 +0000 UTC m=+7692.308886885" lastFinishedPulling="2026-03-20 08:57:49.734923605 +0000 UTC m=+7701.994234756" observedRunningTime="2026-03-20 08:57:50.719022734 +0000 UTC m=+7702.978333885" watchObservedRunningTime="2026-03-20 08:57:50.722960305 +0000 UTC m=+7702.982271456" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.275609 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.276091 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="8f874f73-4453-44c8-b1d9-52559489bead" containerName="openstackclient" containerID="cri-o://9bbc0f5018299d8801809b126e8536554b83592717e71efd04c53bc88080264e" gracePeriod=2 Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.307483 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.346506 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: E0320 08:57:53.347084 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f874f73-4453-44c8-b1d9-52559489bead" containerName="openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: 
I0320 08:57:53.347106 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f874f73-4453-44c8-b1d9-52559489bead" containerName="openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.347404 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f874f73-4453-44c8-b1d9-52559489bead" containerName="openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.348240 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.357380 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="8f874f73-4453-44c8-b1d9-52559489bead" podUID="602b2568-b048-42d1-afbd-b20ebe8e7869" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.358420 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.374507 5136 status_manager.go:875] "Failed to update status for pod" pod="openstack/openstackclient" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"602b2568-b048-42d1-afbd-b20ebe8e7869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T08:57:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T08:57:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T08:57:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-20T08:57:53Z\\\",\\\"message\\\":\\\"containers with unready status: [openstackclient]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.rdoproject.org/podified-antelope-centos9/openstack-openstackclient:39fc4cb70f516d8e9b48225bc0a253ef\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"openstackclient\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/clouds.yaml\\\",\\\"name\\\":\\\"openstack-config\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/.config/openstack/secure.yaml\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/home/cloud-admin/cloudrc\\\",\\\"name\\\":\\\"openstack-config-secret\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem\\\",\\\"name\\\":\\\"combined-ca-bundle\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5x9p4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-03-20T08:57:53Z\\\"}}\" for pod \"openstack\"/\"openstackclient\": pods \"openstackclient\" not found" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.391880 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: E0320 08:57:53.392811 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted 
volumes=[combined-ca-bundle kube-api-access-5x9p4 openstack-config openstack-config-secret], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/openstackclient" podUID="602b2568-b048-42d1-afbd-b20ebe8e7869" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.403510 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:57:53 crc kubenswrapper[5136]: E0320 08:57:53.403770 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.413711 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.421216 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.421272 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x9p4\" (UniqueName: \"kubernetes.io/projected/602b2568-b048-42d1-afbd-b20ebe8e7869-kube-api-access-5x9p4\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.421383 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.421428 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.429642 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.431530 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.443202 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="602b2568-b048-42d1-afbd-b20ebe8e7869" podUID="85e488a7-477c-4368-a461-725ccdc6987e" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.485881 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.524426 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgcpd\" (UniqueName: \"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.524612 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.524739 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.524784 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.524970 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.525032 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5x9p4\" (UniqueName: \"kubernetes.io/projected/602b2568-b048-42d1-afbd-b20ebe8e7869-kube-api-access-5x9p4\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.525129 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret\") pod 
\"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.525183 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.526976 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.528324 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.530064 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: E0320 08:57:53.535314 5136 projected.go:194] Error preparing data for projected volume kube-api-access-5x9p4 for pod openstack/openstackclient: failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (602b2568-b048-42d1-afbd-b20ebe8e7869) does not match the UID in record. The object might have been deleted and then recreated Mar 20 08:57:53 crc kubenswrapper[5136]: E0320 08:57:53.556631 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/602b2568-b048-42d1-afbd-b20ebe8e7869-kube-api-access-5x9p4 podName:602b2568-b048-42d1-afbd-b20ebe8e7869 nodeName:}" failed. 
No retries permitted until 2026-03-20 08:57:54.056605005 +0000 UTC m=+7706.315916156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-5x9p4" (UniqueName: "kubernetes.io/projected/602b2568-b048-42d1-afbd-b20ebe8e7869-kube-api-access-5x9p4") pod "openstackclient" (UID: "602b2568-b048-42d1-afbd-b20ebe8e7869") : failed to fetch token: serviceaccounts "openstackclient-openstackclient" is forbidden: the UID in the bound object reference (602b2568-b048-42d1-afbd-b20ebe8e7869) does not match the UID in record. The object might have been deleted and then recreated Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.535393 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-dmhl7" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.557802 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.559322 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.564363 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret\") pod \"openstackclient\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.634234 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle\") pod \"openstackclient\" (UID: 
\"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.634359 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.634384 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.634419 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fc9n\" (UniqueName: \"kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n\") pod \"kube-state-metrics-0\" (UID: \"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac\") " pod="openstack/kube-state-metrics-0" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.634455 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgcpd\" (UniqueName: \"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.635935 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc 
kubenswrapper[5136]: I0320 08:57:53.639350 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.642359 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.665955 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgcpd\" (UniqueName: \"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd\") pod \"openstackclient\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.736506 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fc9n\" (UniqueName: \"kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n\") pod \"kube-state-metrics-0\" (UID: \"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac\") " pod="openstack/kube-state-metrics-0" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.770023 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.777288 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="602b2568-b048-42d1-afbd-b20ebe8e7869" podUID="85e488a7-477c-4368-a461-725ccdc6987e" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.777699 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.781846 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fc9n\" (UniqueName: \"kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n\") pod \"kube-state-metrics-0\" (UID: \"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac\") " pod="openstack/kube-state-metrics-0" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.846723 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.860689 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="602b2568-b048-42d1-afbd-b20ebe8e7869" podUID="85e488a7-477c-4368-a461-725ccdc6987e" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.944937 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret\") pod \"602b2568-b048-42d1-afbd-b20ebe8e7869\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.945074 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config\") pod 
\"602b2568-b048-42d1-afbd-b20ebe8e7869\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.945092 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle\") pod \"602b2568-b048-42d1-afbd-b20ebe8e7869\" (UID: \"602b2568-b048-42d1-afbd-b20ebe8e7869\") " Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.945616 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5x9p4\" (UniqueName: \"kubernetes.io/projected/602b2568-b048-42d1-afbd-b20ebe8e7869-kube-api-access-5x9p4\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.957059 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "602b2568-b048-42d1-afbd-b20ebe8e7869" (UID: "602b2568-b048-42d1-afbd-b20ebe8e7869"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.963377 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "602b2568-b048-42d1-afbd-b20ebe8e7869" (UID: "602b2568-b048-42d1-afbd-b20ebe8e7869"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:57:53 crc kubenswrapper[5136]: I0320 08:57:53.969110 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "602b2568-b048-42d1-afbd-b20ebe8e7869" (UID: "602b2568-b048-42d1-afbd-b20ebe8e7869"). 
InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.000041 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" probeResult="failure" output=< Mar 20 08:57:54 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 08:57:54 crc kubenswrapper[5136]: > Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.050085 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.050126 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/602b2568-b048-42d1-afbd-b20ebe8e7869-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.050135 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/602b2568-b048-42d1-afbd-b20ebe8e7869-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.064454 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.457236 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602b2568-b048-42d1-afbd-b20ebe8e7869" path="/var/lib/kubelet/pods/602b2568-b048-42d1-afbd-b20ebe8e7869/volumes" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.464596 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.483080 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.514057 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.514341 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-jqddc" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.514483 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.514592 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.514689 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574296 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " 
pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574436 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574487 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574529 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrfj8\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574563 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574580 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: 
\"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.574650 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.619035 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676237 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrfj8\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676294 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676314 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676445 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676497 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676529 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.676586 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.680297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.685467 5136 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.714598 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.714948 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.719522 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.719997 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.748683 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrfj8\" (UniqueName: 
\"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8\") pod \"alertmanager-metric-storage-0\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.799330 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.808800 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.827371 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="602b2568-b048-42d1-afbd-b20ebe8e7869" podUID="85e488a7-477c-4368-a461-725ccdc6987e" Mar 20 08:57:54 crc kubenswrapper[5136]: I0320 08:57:54.917661 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="602b2568-b048-42d1-afbd-b20ebe8e7869" podUID="85e488a7-477c-4368-a461-725ccdc6987e" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.012755 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.103712 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.634095 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.636971 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.643929 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.646671 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.647081 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.648127 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.648162 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.648999 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jt99d" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.649122 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.651929 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.669046 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723058 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723130 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723174 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723245 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723268 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723321 5136 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723348 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723394 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723491 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdd45\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.723523 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 
20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.816135 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac","Type":"ContainerStarted","Data":"6ff5903a5cc26ba52c8004bed71ec6d792235b4e0088d5c041ffe70d1e0d7e6a"} Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.817738 5136 generic.go:334] "Generic (PLEG): container finished" podID="8f874f73-4453-44c8-b1d9-52559489bead" containerID="9bbc0f5018299d8801809b126e8536554b83592717e71efd04c53bc88080264e" exitCode=137 Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825512 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825551 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825598 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825626 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825663 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825721 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mdd45\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825739 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825762 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825789 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: 
\"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.825832 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.827361 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.827891 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.828889 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.841645 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.843157 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.843189 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad804f31e72686a367ca365b9ecb0de79de25c176ca5187be7f65bd43ec38926/globalmount\"" pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.854823 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"85e488a7-477c-4368-a461-725ccdc6987e","Type":"ContainerStarted","Data":"affe30d23ee8aa4f7e6226db1cbd64d0ecf498b5e2794b62dd6f4aa04fc27719"} Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.883419 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.883951 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.884498 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mdd45\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.884850 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.888919 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.895580 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.975568 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:55 crc kubenswrapper[5136]: I0320 08:57:55.988901 5136 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.168356 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.351563 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8ff7\" (UniqueName: \"kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7\") pod \"8f874f73-4453-44c8-b1d9-52559489bead\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.351941 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle\") pod \"8f874f73-4453-44c8-b1d9-52559489bead\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.352125 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret\") pod \"8f874f73-4453-44c8-b1d9-52559489bead\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.352203 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config\") pod \"8f874f73-4453-44c8-b1d9-52559489bead\" (UID: \"8f874f73-4453-44c8-b1d9-52559489bead\") " Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.362087 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7" (OuterVolumeSpecName: 
"kube-api-access-r8ff7") pod "8f874f73-4453-44c8-b1d9-52559489bead" (UID: "8f874f73-4453-44c8-b1d9-52559489bead"). InnerVolumeSpecName "kube-api-access-r8ff7". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.441894 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "8f874f73-4453-44c8-b1d9-52559489bead" (UID: "8f874f73-4453-44c8-b1d9-52559489bead"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.454733 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.454760 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8ff7\" (UniqueName: \"kubernetes.io/projected/8f874f73-4453-44c8-b1d9-52559489bead-kube-api-access-r8ff7\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.463350 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8f874f73-4453-44c8-b1d9-52559489bead" (UID: "8f874f73-4453-44c8-b1d9-52559489bead"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.556763 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.739089 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "8f874f73-4453-44c8-b1d9-52559489bead" (UID: "8f874f73-4453-44c8-b1d9-52559489bead"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.774766 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.777620 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/8f874f73-4453-44c8-b1d9-52559489bead-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 20 08:57:56 crc kubenswrapper[5136]: W0320 08:57:56.792718 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f27cf85_7e28_48aa_b93a_0647c59a7dcc.slice/crio-4ab763507cd2c9da05be42fe8e977f93c3eb5200fdec5c434fb058393a4e3e30 WatchSource:0}: Error finding container 4ab763507cd2c9da05be42fe8e977f93c3eb5200fdec5c434fb058393a4e3e30: Status 404 returned error can't find the container with id 4ab763507cd2c9da05be42fe8e977f93c3eb5200fdec5c434fb058393a4e3e30 Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.904029 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerStarted","Data":"b199eb0de00dd4b5665ed78ec596b43fb8d24fc2002bd3f4dd356a32c51b4138"} Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.906355 5136 scope.go:117] "RemoveContainer" containerID="9bbc0f5018299d8801809b126e8536554b83592717e71efd04c53bc88080264e" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.906505 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.927456 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"85e488a7-477c-4368-a461-725ccdc6987e","Type":"ContainerStarted","Data":"c0b4485ed3a40b504bf79337010ee963ceceae4d07ffa4c5f67bc76669810e3e"} Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.931411 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerStarted","Data":"4ab763507cd2c9da05be42fe8e977f93c3eb5200fdec5c434fb058393a4e3e30"} Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.948551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac","Type":"ContainerStarted","Data":"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76"} Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.949042 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.950279 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=3.950260944 podStartE2EDuration="3.950260944s" podCreationTimestamp="2026-03-20 08:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-20 08:57:56.943449463 +0000 UTC m=+7709.202760624" watchObservedRunningTime="2026-03-20 08:57:56.950260944 +0000 UTC m=+7709.209572095" Mar 20 08:57:56 crc kubenswrapper[5136]: I0320 08:57:56.980678 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.4246009490000002 podStartE2EDuration="3.980652845s" podCreationTimestamp="2026-03-20 08:57:53 +0000 UTC" firstStartedPulling="2026-03-20 08:57:55.084607103 +0000 UTC m=+7707.343918254" lastFinishedPulling="2026-03-20 08:57:55.640658999 +0000 UTC m=+7707.899970150" observedRunningTime="2026-03-20 08:57:56.971569274 +0000 UTC m=+7709.230880425" watchObservedRunningTime="2026-03-20 08:57:56.980652845 +0000 UTC m=+7709.239963996" Mar 20 08:57:58 crc kubenswrapper[5136]: I0320 08:57:58.411080 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f874f73-4453-44c8-b1d9-52559489bead" path="/var/lib/kubelet/pods/8f874f73-4453-44c8-b1d9-52559489bead/volumes" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.153990 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566618-mxdtq"] Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.155960 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.162191 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.162769 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.162938 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.174855 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566618-mxdtq"] Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.282139 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjjf\" (UniqueName: \"kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf\") pod \"auto-csr-approver-29566618-mxdtq\" (UID: \"c9b36af1-10a6-412b-a488-892560533fbc\") " pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.384105 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjjf\" (UniqueName: \"kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf\") pod \"auto-csr-approver-29566618-mxdtq\" (UID: \"c9b36af1-10a6-412b-a488-892560533fbc\") " pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.406063 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjjf\" (UniqueName: \"kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf\") pod \"auto-csr-approver-29566618-mxdtq\" (UID: \"c9b36af1-10a6-412b-a488-892560533fbc\") " 
pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:00 crc kubenswrapper[5136]: I0320 08:58:00.510563 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.008381 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566618-mxdtq"] Mar 20 08:58:01 crc kubenswrapper[5136]: W0320 08:58:01.015928 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9b36af1_10a6_412b_a488_892560533fbc.slice/crio-848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f WatchSource:0}: Error finding container 848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f: Status 404 returned error can't find the container with id 848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.455868 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lbsbr"] Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.457852 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.476019 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbsbr"] Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.611775 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-utilities\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.612085 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46xbq\" (UniqueName: \"kubernetes.io/projected/221d005e-2b68-4835-9bcc-69b3d391e37f-kube-api-access-46xbq\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.612170 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-catalog-content\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.714361 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-utilities\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.714438 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-46xbq\" (UniqueName: \"kubernetes.io/projected/221d005e-2b68-4835-9bcc-69b3d391e37f-kube-api-access-46xbq\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.714463 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-catalog-content\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.715021 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-utilities\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.715034 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/221d005e-2b68-4835-9bcc-69b3d391e37f-catalog-content\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.736307 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46xbq\" (UniqueName: \"kubernetes.io/projected/221d005e-2b68-4835-9bcc-69b3d391e37f-kube-api-access-46xbq\") pod \"community-operators-lbsbr\" (UID: \"221d005e-2b68-4835-9bcc-69b3d391e37f\") " pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:01 crc kubenswrapper[5136]: I0320 08:58:01.774364 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:02 crc kubenswrapper[5136]: I0320 08:58:02.089361 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" event={"ID":"c9b36af1-10a6-412b-a488-892560533fbc","Type":"ContainerStarted","Data":"848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f"} Mar 20 08:58:02 crc kubenswrapper[5136]: I0320 08:58:02.327330 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbsbr"] Mar 20 08:58:02 crc kubenswrapper[5136]: W0320 08:58:02.385304 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod221d005e_2b68_4835_9bcc_69b3d391e37f.slice/crio-8ac3373187bdeaa0234df40ace0e547f666a457b3d45d5370978b78068db5a24 WatchSource:0}: Error finding container 8ac3373187bdeaa0234df40ace0e547f666a457b3d45d5370978b78068db5a24: Status 404 returned error can't find the container with id 8ac3373187bdeaa0234df40ace0e547f666a457b3d45d5370978b78068db5a24 Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.103524 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" event={"ID":"c9b36af1-10a6-412b-a488-892560533fbc","Type":"ContainerStarted","Data":"95c081e05a9b5bcdeb5bad36239bef981a1b982b216877bab99876ec036176fc"} Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.106502 5136 generic.go:334] "Generic (PLEG): container finished" podID="221d005e-2b68-4835-9bcc-69b3d391e37f" containerID="ae12d8645e0d7ade8d1cd9301e6c976cb936fab5f6814efd35ed9b9d05ea4df5" exitCode=0 Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.106551 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbsbr" 
event={"ID":"221d005e-2b68-4835-9bcc-69b3d391e37f","Type":"ContainerDied","Data":"ae12d8645e0d7ade8d1cd9301e6c976cb936fab5f6814efd35ed9b9d05ea4df5"} Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.106617 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbsbr" event={"ID":"221d005e-2b68-4835-9bcc-69b3d391e37f","Type":"ContainerStarted","Data":"8ac3373187bdeaa0234df40ace0e547f666a457b3d45d5370978b78068db5a24"} Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.109442 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerStarted","Data":"0f81420fdec828e5dce02cec8a66a0c6cd8324009373b1662fde78097754c972"} Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.113436 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerStarted","Data":"a3cfce9ddd036c7cf63f0412122dd66290e1e5696f18bc8cd0c8f3ee086aeaaa"} Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.131268 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" podStartSLOduration=2.08990668 podStartE2EDuration="3.13124259s" podCreationTimestamp="2026-03-20 08:58:00 +0000 UTC" firstStartedPulling="2026-03-20 08:58:01.018630503 +0000 UTC m=+7713.277941654" lastFinishedPulling="2026-03-20 08:58:02.059966413 +0000 UTC m=+7714.319277564" observedRunningTime="2026-03-20 08:58:03.119603189 +0000 UTC m=+7715.378914340" watchObservedRunningTime="2026-03-20 08:58:03.13124259 +0000 UTC m=+7715.390553761" Mar 20 08:58:03 crc kubenswrapper[5136]: I0320 08:58:03.985357 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" 
probeResult="failure" output=< Mar 20 08:58:03 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 08:58:03 crc kubenswrapper[5136]: > Mar 20 08:58:04 crc kubenswrapper[5136]: I0320 08:58:04.071786 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 20 08:58:04 crc kubenswrapper[5136]: I0320 08:58:04.126415 5136 generic.go:334] "Generic (PLEG): container finished" podID="c9b36af1-10a6-412b-a488-892560533fbc" containerID="95c081e05a9b5bcdeb5bad36239bef981a1b982b216877bab99876ec036176fc" exitCode=0 Mar 20 08:58:04 crc kubenswrapper[5136]: I0320 08:58:04.126493 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" event={"ID":"c9b36af1-10a6-412b-a488-892560533fbc","Type":"ContainerDied","Data":"95c081e05a9b5bcdeb5bad36239bef981a1b982b216877bab99876ec036176fc"} Mar 20 08:58:05 crc kubenswrapper[5136]: I0320 08:58:05.582499 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:05 crc kubenswrapper[5136]: I0320 08:58:05.619283 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdjjf\" (UniqueName: \"kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf\") pod \"c9b36af1-10a6-412b-a488-892560533fbc\" (UID: \"c9b36af1-10a6-412b-a488-892560533fbc\") " Mar 20 08:58:05 crc kubenswrapper[5136]: I0320 08:58:05.629331 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf" (OuterVolumeSpecName: "kube-api-access-zdjjf") pod "c9b36af1-10a6-412b-a488-892560533fbc" (UID: "c9b36af1-10a6-412b-a488-892560533fbc"). InnerVolumeSpecName "kube-api-access-zdjjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:05 crc kubenswrapper[5136]: I0320 08:58:05.721959 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdjjf\" (UniqueName: \"kubernetes.io/projected/c9b36af1-10a6-412b-a488-892560533fbc-kube-api-access-zdjjf\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.148173 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" event={"ID":"c9b36af1-10a6-412b-a488-892560533fbc","Type":"ContainerDied","Data":"848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f"} Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.148219 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="848fa253976413937eb1ffa1e1d766d564070378f11cdf63fa88656afded2d3f" Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.148229 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566618-mxdtq" Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.250587 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566612-gmnm6"] Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.262171 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566612-gmnm6"] Mar 20 08:58:06 crc kubenswrapper[5136]: I0320 08:58:06.406458 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="474fd165-50ec-4d02-9f52-eb18382cee27" path="/var/lib/kubelet/pods/474fd165-50ec-4d02-9f52-eb18382cee27/volumes" Mar 20 08:58:07 crc kubenswrapper[5136]: I0320 08:58:07.164436 5136 generic.go:334] "Generic (PLEG): container finished" podID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerID="a3cfce9ddd036c7cf63f0412122dd66290e1e5696f18bc8cd0c8f3ee086aeaaa" exitCode=0 Mar 20 08:58:07 crc kubenswrapper[5136]: I0320 08:58:07.164477 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerDied","Data":"a3cfce9ddd036c7cf63f0412122dd66290e1e5696f18bc8cd0c8f3ee086aeaaa"} Mar 20 08:58:08 crc kubenswrapper[5136]: I0320 08:58:08.181418 5136 generic.go:334] "Generic (PLEG): container finished" podID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerID="0f81420fdec828e5dce02cec8a66a0c6cd8324009373b1662fde78097754c972" exitCode=0 Mar 20 08:58:08 crc kubenswrapper[5136]: I0320 08:58:08.181460 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerDied","Data":"0f81420fdec828e5dce02cec8a66a0c6cd8324009373b1662fde78097754c972"} Mar 20 08:58:08 crc kubenswrapper[5136]: I0320 08:58:08.405225 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:58:08 crc kubenswrapper[5136]: E0320 08:58:08.405478 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:58:09 crc kubenswrapper[5136]: I0320 08:58:09.194679 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbsbr" event={"ID":"221d005e-2b68-4835-9bcc-69b3d391e37f","Type":"ContainerStarted","Data":"c065757c937f22da9e8b7667e94c1c8c670b21bd25f6581d8d73ba92f6d8453a"} Mar 20 08:58:10 crc kubenswrapper[5136]: I0320 08:58:10.211198 5136 generic.go:334] "Generic (PLEG): container finished" podID="221d005e-2b68-4835-9bcc-69b3d391e37f" 
containerID="c065757c937f22da9e8b7667e94c1c8c670b21bd25f6581d8d73ba92f6d8453a" exitCode=0 Mar 20 08:58:10 crc kubenswrapper[5136]: I0320 08:58:10.211239 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbsbr" event={"ID":"221d005e-2b68-4835-9bcc-69b3d391e37f","Type":"ContainerDied","Data":"c065757c937f22da9e8b7667e94c1c8c670b21bd25f6581d8d73ba92f6d8453a"} Mar 20 08:58:11 crc kubenswrapper[5136]: I0320 08:58:11.225489 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerStarted","Data":"20f8a9a945087915c09ba9f6c5bb3fad1e06a23db6077bf675c7ee359a2b9ea4"} Mar 20 08:58:12 crc kubenswrapper[5136]: I0320 08:58:12.237684 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lbsbr" event={"ID":"221d005e-2b68-4835-9bcc-69b3d391e37f","Type":"ContainerStarted","Data":"066ba98012246429f2094fb53db95c9626683ff7906a87b660df9b50aa1b5b85"} Mar 20 08:58:13 crc kubenswrapper[5136]: I0320 08:58:13.978618 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" probeResult="failure" output=< Mar 20 08:58:13 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 08:58:13 crc kubenswrapper[5136]: > Mar 20 08:58:14 crc kubenswrapper[5136]: I0320 08:58:14.259436 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerStarted","Data":"c48b0b35287dd607ba880a1efbf0d79170312ce43d17777e16e04be5b17bbe8a"} Mar 20 08:58:14 crc kubenswrapper[5136]: I0320 08:58:14.259729 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Mar 20 08:58:14 crc 
kubenswrapper[5136]: I0320 08:58:14.263439 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Mar 20 08:58:14 crc kubenswrapper[5136]: I0320 08:58:14.286488 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lbsbr" podStartSLOduration=5.306366661 podStartE2EDuration="13.286463538s" podCreationTimestamp="2026-03-20 08:58:01 +0000 UTC" firstStartedPulling="2026-03-20 08:58:03.108498625 +0000 UTC m=+7715.367809776" lastFinishedPulling="2026-03-20 08:58:11.088595502 +0000 UTC m=+7723.347906653" observedRunningTime="2026-03-20 08:58:12.25748904 +0000 UTC m=+7724.516800191" watchObservedRunningTime="2026-03-20 08:58:14.286463538 +0000 UTC m=+7726.545774689" Mar 20 08:58:14 crc kubenswrapper[5136]: I0320 08:58:14.288163 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=5.440450972 podStartE2EDuration="20.288154731s" podCreationTimestamp="2026-03-20 08:57:54 +0000 UTC" firstStartedPulling="2026-03-20 08:57:55.886941764 +0000 UTC m=+7708.146252915" lastFinishedPulling="2026-03-20 08:58:10.734645523 +0000 UTC m=+7722.993956674" observedRunningTime="2026-03-20 08:58:14.282883918 +0000 UTC m=+7726.542195069" watchObservedRunningTime="2026-03-20 08:58:14.288154731 +0000 UTC m=+7726.547465882" Mar 20 08:58:19 crc kubenswrapper[5136]: I0320 08:58:19.307772 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerStarted","Data":"c4592edf336fdf2500951f3bf6021b40ab54c666d60df8a359a5c1fd53d8e319"} Mar 20 08:58:20 crc kubenswrapper[5136]: I0320 08:58:20.398505 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:58:20 crc kubenswrapper[5136]: E0320 08:58:20.399031 5136 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:58:21 crc kubenswrapper[5136]: I0320 08:58:21.329127 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerStarted","Data":"4558c33f4de2c378dedbd4356bb40a205875151331a5b77dc10d84ca3825805f"} Mar 20 08:58:21 crc kubenswrapper[5136]: I0320 08:58:21.775244 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:21 crc kubenswrapper[5136]: I0320 08:58:21.775986 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:21 crc kubenswrapper[5136]: I0320 08:58:21.822495 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:22 crc kubenswrapper[5136]: I0320 08:58:22.386289 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lbsbr" Mar 20 08:58:22 crc kubenswrapper[5136]: I0320 08:58:22.467139 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lbsbr"] Mar 20 08:58:22 crc kubenswrapper[5136]: I0320 08:58:22.511379 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 08:58:22 crc kubenswrapper[5136]: I0320 08:58:22.511613 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-dmmv5" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="registry-server" containerID="cri-o://433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b" gracePeriod=2 Mar 20 08:58:22 crc kubenswrapper[5136]: I0320 08:58:22.989670 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.060449 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.133348 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dmmv5" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.210239 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities\") pod \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.210423 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92f6b\" (UniqueName: \"kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b\") pod \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.210543 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content\") pod \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\" (UID: \"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b\") " Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.211229 5136 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities" (OuterVolumeSpecName: "utilities") pod "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" (UID: "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.223110 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b" (OuterVolumeSpecName: "kube-api-access-92f6b") pod "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" (UID: "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b"). InnerVolumeSpecName "kube-api-access-92f6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.295482 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" (UID: "bd7d7add-fc30-4efd-96dc-b253a6fd1b8b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.312723 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92f6b\" (UniqueName: \"kubernetes.io/projected/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-kube-api-access-92f6b\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.312756 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.312765 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.352463 5136 generic.go:334] "Generic (PLEG): container finished" podID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerID="433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b" exitCode=0 Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.352911 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerDied","Data":"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b"} Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.353000 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dmmv5" event={"ID":"bd7d7add-fc30-4efd-96dc-b253a6fd1b8b","Type":"ContainerDied","Data":"cffe9a0608630e68ebe445b4b73ca250c67588ca57233ed2a5a8f8aeafc8a8ef"} Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.352998 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dmmv5" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.353047 5136 scope.go:117] "RemoveContainer" containerID="433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.387311 5136 scope.go:117] "RemoveContainer" containerID="72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.394605 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.403830 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dmmv5"] Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.415716 5136 scope.go:117] "RemoveContainer" containerID="360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.459368 5136 scope.go:117] "RemoveContainer" containerID="433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b" Mar 20 08:58:23 crc kubenswrapper[5136]: E0320 08:58:23.459847 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b\": container with ID starting with 433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b not found: ID does not exist" containerID="433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.459889 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b"} err="failed to get container status \"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b\": rpc error: code = NotFound desc = could not find 
container \"433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b\": container with ID starting with 433d8d118a822ff97efcfc1c8de93d44c32a5d6b9bf13a2db4bb7a4aa4753a0b not found: ID does not exist" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.459913 5136 scope.go:117] "RemoveContainer" containerID="72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd" Mar 20 08:58:23 crc kubenswrapper[5136]: E0320 08:58:23.460428 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd\": container with ID starting with 72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd not found: ID does not exist" containerID="72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.460447 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd"} err="failed to get container status \"72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd\": rpc error: code = NotFound desc = could not find container \"72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd\": container with ID starting with 72560082fe8c2073a778c54fa6e7b3019318a9079881095b3a52834527633ecd not found: ID does not exist" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.460460 5136 scope.go:117] "RemoveContainer" containerID="360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677" Mar 20 08:58:23 crc kubenswrapper[5136]: E0320 08:58:23.460694 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677\": container with ID starting with 360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677 not found: ID does 
not exist" containerID="360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677" Mar 20 08:58:23 crc kubenswrapper[5136]: I0320 08:58:23.460724 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677"} err="failed to get container status \"360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677\": rpc error: code = NotFound desc = could not find container \"360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677\": container with ID starting with 360129e0e8a017b4f0c343c930bc4a4f1e5b8b9f8a16f082a9c17bc91e39b677 not found: ID does not exist" Mar 20 08:58:24 crc kubenswrapper[5136]: I0320 08:58:24.407033 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" path="/var/lib/kubelet/pods/bd7d7add-fc30-4efd-96dc-b253a6fd1b8b/volumes" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.273770 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"] Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.274261 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w4zdj" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" containerID="cri-o://a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1" gracePeriod=2 Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.387565 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerStarted","Data":"900d4676bbda872999819d15b70a4cf6ea3c9fa4b3672549e4f5c0f54e67afed"} Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.414932 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.276671029 
podStartE2EDuration="31.414914058s" podCreationTimestamp="2026-03-20 08:57:54 +0000 UTC" firstStartedPulling="2026-03-20 08:57:56.814615314 +0000 UTC m=+7709.073926465" lastFinishedPulling="2026-03-20 08:58:24.952858343 +0000 UTC m=+7737.212169494" observedRunningTime="2026-03-20 08:58:25.409291604 +0000 UTC m=+7737.668602755" watchObservedRunningTime="2026-03-20 08:58:25.414914058 +0000 UTC m=+7737.674225209" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.802898 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.860107 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncj89\" (UniqueName: \"kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89\") pod \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.860208 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities\") pod \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.860267 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content\") pod \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\" (UID: \"fbbd891d-c8eb-404c-8255-2a3bba4035ee\") " Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.861039 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities" (OuterVolumeSpecName: "utilities") pod "fbbd891d-c8eb-404c-8255-2a3bba4035ee" (UID: 
"fbbd891d-c8eb-404c-8255-2a3bba4035ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.865799 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89" (OuterVolumeSpecName: "kube-api-access-ncj89") pod "fbbd891d-c8eb-404c-8255-2a3bba4035ee" (UID: "fbbd891d-c8eb-404c-8255-2a3bba4035ee"). InnerVolumeSpecName "kube-api-access-ncj89". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.962401 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncj89\" (UniqueName: \"kubernetes.io/projected/fbbd891d-c8eb-404c-8255-2a3bba4035ee-kube-api-access-ncj89\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.962442 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.980528 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbbd891d-c8eb-404c-8255-2a3bba4035ee" (UID: "fbbd891d-c8eb-404c-8255-2a3bba4035ee"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.990550 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.990594 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:25 crc kubenswrapper[5136]: I0320 08:58:25.993390 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.064343 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbbd891d-c8eb-404c-8255-2a3bba4035ee-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.397545 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerID="a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1" exitCode=0 Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.397652 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w4zdj" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.406701 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerDied","Data":"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1"} Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.409026 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w4zdj" event={"ID":"fbbd891d-c8eb-404c-8255-2a3bba4035ee","Type":"ContainerDied","Data":"b8ea34d57154026d7acfa78afb0581c846280208e141611a13e5cd82670eae89"} Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.409274 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.409114 5136 scope.go:117] "RemoveContainer" containerID="a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.455674 5136 scope.go:117] "RemoveContainer" containerID="9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.475409 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"] Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.527725 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w4zdj"] Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.531961 5136 scope.go:117] "RemoveContainer" containerID="3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.562696 5136 scope.go:117] "RemoveContainer" containerID="a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1" Mar 20 08:58:26 crc kubenswrapper[5136]: E0320 
08:58:26.563371 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1\": container with ID starting with a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1 not found: ID does not exist" containerID="a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.563401 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1"} err="failed to get container status \"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1\": rpc error: code = NotFound desc = could not find container \"a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1\": container with ID starting with a3549e454b3af8bca8a21ce48c0f40eeba05a452e41f41e4fd9ab1e221d1c0b1 not found: ID does not exist" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.563428 5136 scope.go:117] "RemoveContainer" containerID="9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515" Mar 20 08:58:26 crc kubenswrapper[5136]: E0320 08:58:26.568127 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515\": container with ID starting with 9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515 not found: ID does not exist" containerID="9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.568172 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515"} err="failed to get container status \"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515\": rpc 
error: code = NotFound desc = could not find container \"9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515\": container with ID starting with 9094788af2a605670849104ee4d6adee74372e1ebb33687ad51ef917b6de5515 not found: ID does not exist" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.568199 5136 scope.go:117] "RemoveContainer" containerID="3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd" Mar 20 08:58:26 crc kubenswrapper[5136]: E0320 08:58:26.568613 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd\": container with ID starting with 3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd not found: ID does not exist" containerID="3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd" Mar 20 08:58:26 crc kubenswrapper[5136]: I0320 08:58:26.568648 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd"} err="failed to get container status \"3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd\": rpc error: code = NotFound desc = could not find container \"3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd\": container with ID starting with 3f9934810b0bc967208152c81856a6359b11543f4d527a5e931b126e5e0d7efd not found: ID does not exist" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.193563 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.194160 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="85e488a7-477c-4368-a461-725ccdc6987e" containerName="openstackclient" containerID="cri-o://c0b4485ed3a40b504bf79337010ee963ceceae4d07ffa4c5f67bc76669810e3e" gracePeriod=2 Mar 20 
08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.212465 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.223943 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224430 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="extract-content" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224449 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="extract-content" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224461 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="extract-utilities" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224467 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="extract-utilities" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224474 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224481 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224495 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85e488a7-477c-4368-a461-725ccdc6987e" containerName="openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224501 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="85e488a7-477c-4368-a461-725ccdc6987e" containerName="openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224511 5136 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="c9b36af1-10a6-412b-a488-892560533fbc" containerName="oc" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224517 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9b36af1-10a6-412b-a488-892560533fbc" containerName="oc" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224527 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="extract-utilities" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224533 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="extract-utilities" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224546 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224552 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: E0320 08:58:28.224572 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="extract-content" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224578 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="extract-content" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224765 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224779 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="85e488a7-477c-4368-a461-725ccdc6987e" containerName="openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224789 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c9b36af1-10a6-412b-a488-892560533fbc" containerName="oc" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.224799 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7d7add-fc30-4efd-96dc-b253a6fd1b8b" containerName="registry-server" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.225781 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.236382 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.254173 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="85e488a7-477c-4368-a461-725ccdc6987e" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.306957 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.307002 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.307042 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4j4n\" (UniqueName: \"kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n\") pod \"openstackclient\" (UID: 
\"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.307499 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.408684 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4j4n\" (UniqueName: \"kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.408809 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.408879 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.408899 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 
08:58:28.409794 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.411023 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbbd891d-c8eb-404c-8255-2a3bba4035ee" path="/var/lib/kubelet/pods/fbbd891d-c8eb-404c-8255-2a3bba4035ee/volumes" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.414532 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.419564 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.432502 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4j4n\" (UniqueName: \"kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n\") pod \"openstackclient\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " pod="openstack/openstackclient" Mar 20 08:58:28 crc kubenswrapper[5136]: I0320 08:58:28.547170 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.055017 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Mar 20 08:58:29 crc kubenswrapper[5136]: W0320 08:58:29.055284 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cefd58c_a889_4893_aa87_b106eae1c7ad.slice/crio-27f5cb228076724219a2f0fe834d5acfadd46e315bf55e73db187a6092a16d5c WatchSource:0}: Error finding container 27f5cb228076724219a2f0fe834d5acfadd46e315bf55e73db187a6092a16d5c: Status 404 returned error can't find the container with id 27f5cb228076724219a2f0fe834d5acfadd46e315bf55e73db187a6092a16d5c Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.429725 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9cefd58c-a889-4893-aa87-b106eae1c7ad","Type":"ContainerStarted","Data":"80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423"} Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.430059 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"9cefd58c-a889-4893-aa87-b106eae1c7ad","Type":"ContainerStarted","Data":"27f5cb228076724219a2f0fe834d5acfadd46e315bf55e73db187a6092a16d5c"} Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.447734 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=1.447714836 podStartE2EDuration="1.447714836s" podCreationTimestamp="2026-03-20 08:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:58:29.444676811 +0000 UTC m=+7741.703987962" watchObservedRunningTime="2026-03-20 08:58:29.447714836 +0000 UTC m=+7741.707025987" Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.823700 5136 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.824002 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="prometheus" containerID="cri-o://c4592edf336fdf2500951f3bf6021b40ab54c666d60df8a359a5c1fd53d8e319" gracePeriod=600 Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.824160 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="thanos-sidecar" containerID="cri-o://900d4676bbda872999819d15b70a4cf6ea3c9fa4b3672549e4f5c0f54e67afed" gracePeriod=600 Mar 20 08:58:29 crc kubenswrapper[5136]: I0320 08:58:29.824160 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="config-reloader" containerID="cri-o://4558c33f4de2c378dedbd4356bb40a205875151331a5b77dc10d84ca3825805f" gracePeriod=600 Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.439671 5136 generic.go:334] "Generic (PLEG): container finished" podID="85e488a7-477c-4368-a461-725ccdc6987e" containerID="c0b4485ed3a40b504bf79337010ee963ceceae4d07ffa4c5f67bc76669810e3e" exitCode=137 Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442707 5136 generic.go:334] "Generic (PLEG): container finished" podID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerID="900d4676bbda872999819d15b70a4cf6ea3c9fa4b3672549e4f5c0f54e67afed" exitCode=0 Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442733 5136 generic.go:334] "Generic (PLEG): container finished" podID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerID="4558c33f4de2c378dedbd4356bb40a205875151331a5b77dc10d84ca3825805f" exitCode=0 Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442742 5136 
generic.go:334] "Generic (PLEG): container finished" podID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerID="c4592edf336fdf2500951f3bf6021b40ab54c666d60df8a359a5c1fd53d8e319" exitCode=0 Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442741 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerDied","Data":"900d4676bbda872999819d15b70a4cf6ea3c9fa4b3672549e4f5c0f54e67afed"} Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442789 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerDied","Data":"4558c33f4de2c378dedbd4356bb40a205875151331a5b77dc10d84ca3825805f"} Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.442803 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerDied","Data":"c4592edf336fdf2500951f3bf6021b40ab54c666d60df8a359a5c1fd53d8e319"} Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.553831 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.560057 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="85e488a7-477c-4368-a461-725ccdc6987e" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.653481 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle\") pod \"85e488a7-477c-4368-a461-725ccdc6987e\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.653558 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret\") pod \"85e488a7-477c-4368-a461-725ccdc6987e\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.653593 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgcpd\" (UniqueName: \"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd\") pod \"85e488a7-477c-4368-a461-725ccdc6987e\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.653739 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config\") pod \"85e488a7-477c-4368-a461-725ccdc6987e\" (UID: \"85e488a7-477c-4368-a461-725ccdc6987e\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.700586 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd" (OuterVolumeSpecName: "kube-api-access-tgcpd") pod "85e488a7-477c-4368-a461-725ccdc6987e" (UID: "85e488a7-477c-4368-a461-725ccdc6987e"). InnerVolumeSpecName "kube-api-access-tgcpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.782456 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tgcpd\" (UniqueName: \"kubernetes.io/projected/85e488a7-477c-4368-a461-725ccdc6987e-kube-api-access-tgcpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.803631 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "85e488a7-477c-4368-a461-725ccdc6987e" (UID: "85e488a7-477c-4368-a461-725ccdc6987e"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.832030 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "85e488a7-477c-4368-a461-725ccdc6987e" (UID: "85e488a7-477c-4368-a461-725ccdc6987e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.870740 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "85e488a7-477c-4368-a461-725ccdc6987e" (UID: "85e488a7-477c-4368-a461-725ccdc6987e"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.887725 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.887765 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.887787 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/85e488a7-477c-4368-a461-725ccdc6987e-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.940607 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.988705 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.988769 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989003 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989029 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989080 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989112 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989177 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989221 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mdd45\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: 
\"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989251 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.989322 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0\") pod \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\" (UID: \"9f27cf85-7e28-48aa-b93a-0647c59a7dcc\") " Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.990843 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.994303 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.995701 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.996153 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out" (OuterVolumeSpecName: "config-out") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.996289 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config" (OuterVolumeSpecName: "config") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.996677 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45" (OuterVolumeSpecName: "kube-api-access-mdd45") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "kube-api-access-mdd45". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.996793 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:30 crc kubenswrapper[5136]: I0320 08:58:30.996875 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.024156 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config" (OuterVolumeSpecName: "web-config") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.024196 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "9f27cf85-7e28-48aa-b93a-0647c59a7dcc" (UID: "9f27cf85-7e28-48aa-b93a-0647c59a7dcc"). InnerVolumeSpecName "pvc-d394ecb7-8fc1-4066-a753-5896ab167a34". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.095085 5136 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.095123 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.095177 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") on node \"crc\" " Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.095194 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.095212 5136 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.104912 5136 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-config-out\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.104926 5136 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-web-config\") on node \"crc\" 
DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.104936 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mdd45\" (UniqueName: \"kubernetes.io/projected/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-kube-api-access-mdd45\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.104947 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.104956 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/9f27cf85-7e28-48aa-b93a-0647c59a7dcc-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.124027 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.124373 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d394ecb7-8fc1-4066-a753-5896ab167a34" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34") on node "crc" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.206724 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.453398 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.453403 5136 scope.go:117] "RemoveContainer" containerID="c0b4485ed3a40b504bf79337010ee963ceceae4d07ffa4c5f67bc76669810e3e" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.458155 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="85e488a7-477c-4368-a461-725ccdc6987e" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.461312 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"9f27cf85-7e28-48aa-b93a-0647c59a7dcc","Type":"ContainerDied","Data":"4ab763507cd2c9da05be42fe8e977f93c3eb5200fdec5c434fb058393a4e3e30"} Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.461427 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.535104 5136 scope.go:117] "RemoveContainer" containerID="900d4676bbda872999819d15b70a4cf6ea3c9fa4b3672549e4f5c0f54e67afed" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.543206 5136 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="85e488a7-477c-4368-a461-725ccdc6987e" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.543484 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.550138 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.578107 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:58:31 crc kubenswrapper[5136]: 
E0320 08:58:31.578625 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="init-config-reloader" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.578638 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="init-config-reloader" Mar 20 08:58:31 crc kubenswrapper[5136]: E0320 08:58:31.578699 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="thanos-sidecar" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.578709 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="thanos-sidecar" Mar 20 08:58:31 crc kubenswrapper[5136]: E0320 08:58:31.578739 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="prometheus" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.578746 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="prometheus" Mar 20 08:58:31 crc kubenswrapper[5136]: E0320 08:58:31.578785 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="config-reloader" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.578792 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="config-reloader" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.579038 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="thanos-sidecar" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.579090 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="config-reloader" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 
08:58:31.579104 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" containerName="prometheus" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.579776 5136 scope.go:117] "RemoveContainer" containerID="4558c33f4de2c378dedbd4356bb40a205875151331a5b77dc10d84ca3825805f" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.581366 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.589184 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591121 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591286 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591313 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591424 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591609 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-jt99d" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.591763 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.592461 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Mar 20 
08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.607461 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.612439 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.638349 5136 scope.go:117] "RemoveContainer" containerID="c4592edf336fdf2500951f3bf6021b40ab54c666d60df8a359a5c1fd53d8e319" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.685906 5136 scope.go:117] "RemoveContainer" containerID="0f81420fdec828e5dce02cec8a66a0c6cd8324009373b1662fde78097754c972" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741139 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741360 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741409 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: 
\"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741449 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741473 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741500 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741782 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62847\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741875 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.741952 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.742691 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.742750 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.742784 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 
08:58:31.742875 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844692 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62847\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844732 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844761 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0" Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844798 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") 
" pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844857 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844879 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844901 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844932 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.844956 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.845003 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.845044 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.845070 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.845095 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.846659 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.847053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.847271 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.851362 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.851891 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.852054 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.852300 5136 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.852333 5136 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ad804f31e72686a367ca365b9ecb0de79de25c176ca5187be7f65bd43ec38926/globalmount\"" pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.861297 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.861233 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.862520 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.863187 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.863991 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62847\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.867283 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.889966 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"prometheus-metric-storage-0\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:31 crc kubenswrapper[5136]: I0320 08:58:31.957241 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:32 crc kubenswrapper[5136]: I0320 08:58:32.407071 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85e488a7-477c-4368-a461-725ccdc6987e" path="/var/lib/kubelet/pods/85e488a7-477c-4368-a461-725ccdc6987e/volumes"
Mar 20 08:58:32 crc kubenswrapper[5136]: I0320 08:58:32.408169 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f27cf85-7e28-48aa-b93a-0647c59a7dcc" path="/var/lib/kubelet/pods/9f27cf85-7e28-48aa-b93a-0647c59a7dcc/volumes"
Mar 20 08:58:32 crc kubenswrapper[5136]: I0320 08:58:32.439240 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 20 08:58:32 crc kubenswrapper[5136]: I0320 08:58:32.483471 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerStarted","Data":"441fbbe54f0f16d0c91d190250a9aea863086641a75258b3146f19278093050a"}
Mar 20 08:58:33 crc kubenswrapper[5136]: I0320 08:58:33.396402 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"
Mar 20 08:58:33 crc kubenswrapper[5136]: E0320 08:58:33.397130 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.054800 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-vgnkq"]
Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.063493 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b8c9-account-create-update-8v2dt"]
Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.072385 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-vgnkq"]
Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.083116 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b8c9-account-create-update-8v2dt"]
Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.415935 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d0704b-80fd-44fe-9007-2971cc8a6cf6" path="/var/lib/kubelet/pods/63d0704b-80fd-44fe-9007-2971cc8a6cf6/volumes"
Mar 20 08:58:34 crc kubenswrapper[5136]: I0320 08:58:34.416730 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0863275-620b-4bea-a747-135c323ebb6f" path="/var/lib/kubelet/pods/f0863275-620b-4bea-a747-135c323ebb6f/volumes"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.143612 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.145833 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.148866 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.163117 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.169428 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239710 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239833 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239874 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239932 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239961 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.239994 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.240068 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gxrt\" (UniqueName: \"kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.341267 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342214 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342319 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gxrt\" (UniqueName: \"kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342369 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342464 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342506 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342571 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.342735 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.343268 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.347394 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.347678 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.348054 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.348408 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.361583 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gxrt\" (UniqueName: \"kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt\") pod \"ceilometer-0\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.466706 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.521754 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerStarted","Data":"9a1c12a00febf49412d459684283c5a7557c491ef07270676850c0c1b6e79a69"}
Mar 20 08:58:35 crc kubenswrapper[5136]: I0320 08:58:35.986141 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 08:58:35 crc kubenswrapper[5136]: W0320 08:58:35.996124 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc92a2e9_70bb_400b_ab37_3e17b334a8de.slice/crio-e06e56d6f6d6b7a4d88b163ff2bdce81afa10713d484b4f149a44013355b9963 WatchSource:0}: Error finding container e06e56d6f6d6b7a4d88b163ff2bdce81afa10713d484b4f149a44013355b9963: Status 404 returned error can't find the container with id e06e56d6f6d6b7a4d88b163ff2bdce81afa10713d484b4f149a44013355b9963
Mar 20 08:58:36 crc kubenswrapper[5136]: I0320 08:58:36.533764 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerStarted","Data":"e06e56d6f6d6b7a4d88b163ff2bdce81afa10713d484b4f149a44013355b9963"}
Mar 20 08:58:40 crc kubenswrapper[5136]: I0320 08:58:40.528326 5136 scope.go:117] "RemoveContainer" containerID="ba829091226a089834672fdb8aaa0264ffcab6218d4874fe20d15ed41e821de5"
Mar 20 08:58:40 crc kubenswrapper[5136]: I0320 08:58:40.574066 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerStarted","Data":"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9"}
Mar 20 08:58:40 crc kubenswrapper[5136]: I0320 08:58:40.594948 5136 scope.go:117] "RemoveContainer" containerID="02bf2fddb0787ba56f7a7d4d2929f25e0b16aff46d2b34aac1bc69f87f328612"
Mar 20 08:58:40 crc kubenswrapper[5136]: I0320 08:58:40.715192 5136 scope.go:117] "RemoveContainer" containerID="6d85db0ede2cb37b721e22824a2dda96a152a59cfb86afea2b68c0eedbe79e58"
Mar 20 08:58:41 crc kubenswrapper[5136]: I0320 08:58:41.584921 5136 generic.go:334] "Generic (PLEG): container finished" podID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerID="9a1c12a00febf49412d459684283c5a7557c491ef07270676850c0c1b6e79a69" exitCode=0
Mar 20 08:58:41 crc kubenswrapper[5136]: I0320 08:58:41.585006 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerDied","Data":"9a1c12a00febf49412d459684283c5a7557c491ef07270676850c0c1b6e79a69"}
Mar 20 08:58:41 crc kubenswrapper[5136]: I0320 08:58:41.591278 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerStarted","Data":"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192"}
Mar 20 08:58:42 crc kubenswrapper[5136]: I0320 08:58:42.601972 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerStarted","Data":"10b1e7850200894bb4a977e72087deccfe8ba698c99b51dc2f4eca430f875e7b"}
Mar 20 08:58:42 crc kubenswrapper[5136]: I0320 08:58:42.604512 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerStarted","Data":"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59"}
Mar 20 08:58:44 crc kubenswrapper[5136]: I0320 08:58:44.643167 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerStarted","Data":"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3"}
Mar 20 08:58:44 crc kubenswrapper[5136]: I0320 08:58:44.643788 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 20 08:58:44 crc kubenswrapper[5136]: I0320 08:58:44.673365 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.9891667210000001 podStartE2EDuration="9.673339005s" podCreationTimestamp="2026-03-20 08:58:35 +0000 UTC" firstStartedPulling="2026-03-20 08:58:36.001542124 +0000 UTC m=+7748.260853265" lastFinishedPulling="2026-03-20 08:58:43.685714398 +0000 UTC m=+7755.945025549" observedRunningTime="2026-03-20 08:58:44.672130238 +0000 UTC m=+7756.931441399" watchObservedRunningTime="2026-03-20 08:58:44.673339005 +0000 UTC m=+7756.932650156"
Mar 20 08:58:45 crc kubenswrapper[5136]: I0320 08:58:45.659007 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerStarted","Data":"58ad47341d36382588ada0b35a93996f0bd176fcc8c2283488644a65072e6d6a"}
Mar 20 08:58:45 crc kubenswrapper[5136]: I0320 08:58:45.659337 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerStarted","Data":"22f88ba3ba68ee1437a03f43cb79cd30dcc82808fe8518d87eaa412f688babdc"}
Mar 20 08:58:45 crc kubenswrapper[5136]: I0320 08:58:45.704320 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=14.704299894 podStartE2EDuration="14.704299894s" podCreationTimestamp="2026-03-20 08:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:58:45.694101468 +0000 UTC m=+7757.953412629" watchObservedRunningTime="2026-03-20 08:58:45.704299894 +0000 UTC m=+7757.963611055"
Mar 20 08:58:46 crc kubenswrapper[5136]: I0320 08:58:46.397574 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"
Mar 20 08:58:46 crc kubenswrapper[5136]: E0320 08:58:46.397986 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:58:46 crc kubenswrapper[5136]: I0320 08:58:46.958351 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:46 crc kubenswrapper[5136]: I0320 08:58:46.959062 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:46 crc kubenswrapper[5136]: I0320 08:58:46.974377 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:47 crc kubenswrapper[5136]: I0320 08:58:47.690990 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Mar 20 08:58:48 crc kubenswrapper[5136]: I0320 08:58:48.843796 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-w7sqw"]
Mar 20 08:58:48 crc kubenswrapper[5136]: I0320 08:58:48.845665 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-w7sqw"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.606486 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.606543 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzxx\" (UniqueName: \"kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.682667 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-35ea-account-create-update-6jb9f"]
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.697431 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-6jb9f"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.700578 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.711642 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzthn\" (UniqueName: \"kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.711758 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.711793 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gzxx\" (UniqueName: \"kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.711873 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.739648 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.762687 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gzxx\" (UniqueName: \"kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx\") pod \"aodh-db-create-w7sqw\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " pod="openstack/aodh-db-create-w7sqw"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.766393 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-35ea-account-create-update-6jb9f"]
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.773958 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-w7sqw"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.780025 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-w7sqw"]
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.813954 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzthn\" (UniqueName: \"kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.814086 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.814745 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f"
Mar 20 08:58:49 crc kubenswrapper[5136]: I0320 08:58:49.834765 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzthn\" (UniqueName: \"kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn\") pod \"aodh-35ea-account-create-update-6jb9f\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") " pod="openstack/aodh-35ea-account-create-update-6jb9f"
Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.055836 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-6jb9f"
Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.354597 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-w7sqw"]
Mar 20 08:58:50 crc kubenswrapper[5136]: W0320 08:58:50.712628 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9b2a0f1_96d1_4edc_a219_60194a2bf4b9.slice/crio-4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270 WatchSource:0}: Error finding container 4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270: Status 404 returned error can't find the container with id 4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270
Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.715982 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-35ea-account-create-update-6jb9f"]
Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.723924 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-w7sqw" event={"ID":"70745a35-fe6f-4248-ac87-970763afe00e","Type":"ContainerStarted","Data":"3a4f7e90e6acf592c84321f18597eeea1a3c43546eaea8fecf69996c5f79ba99"}
Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.723977 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-w7sqw" event={"ID":"70745a35-fe6f-4248-ac87-970763afe00e","Type":"ContainerStarted","Data":"552f2828cf75b55f8bdebd9a56db3e613c667737eda344a9f955eb21c4de5edf"}
Mar 20 08:58:50 crc kubenswrapper[5136]: I0320 08:58:50.746673 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-w7sqw" podStartSLOduration=2.746651677 podStartE2EDuration="2.746651677s" podCreationTimestamp="2026-03-20 08:58:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 08:58:50.739342471 +0000 UTC m=+7762.998653622" watchObservedRunningTime="2026-03-20 08:58:50.746651677 +0000 UTC m=+7763.005962828"
Mar 20 08:58:51 crc kubenswrapper[5136]: I0320 08:58:51.734451 5136 generic.go:334] "Generic (PLEG): container finished" podID="70745a35-fe6f-4248-ac87-970763afe00e" containerID="3a4f7e90e6acf592c84321f18597eeea1a3c43546eaea8fecf69996c5f79ba99" exitCode=0
Mar 20 08:58:51 crc kubenswrapper[5136]: I0320 08:58:51.734811 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-w7sqw" event={"ID":"70745a35-fe6f-4248-ac87-970763afe00e","Type":"ContainerDied","Data":"3a4f7e90e6acf592c84321f18597eeea1a3c43546eaea8fecf69996c5f79ba99"}
Mar 20 08:58:51 crc kubenswrapper[5136]: I0320 08:58:51.736526 5136 generic.go:334] "Generic (PLEG): container finished" podID="a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" containerID="bbe438fbefca46d6264b55b57938c854859588f624d630a720b3f84f596f758f" exitCode=0
Mar 20 08:58:51 crc kubenswrapper[5136]: I0320 08:58:51.736568 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-35ea-account-create-update-6jb9f" event={"ID":"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9","Type":"ContainerDied","Data":"bbe438fbefca46d6264b55b57938c854859588f624d630a720b3f84f596f758f"}
Mar 20 08:58:51 crc kubenswrapper[5136]: I0320 08:58:51.736593 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-35ea-account-create-update-6jb9f" event={"ID":"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9","Type":"ContainerStarted","Data":"4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270"}
Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.156962 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-w7sqw"
Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.160576 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-6jb9f"
Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.297778 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gzxx\" (UniqueName: \"kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx\") pod \"70745a35-fe6f-4248-ac87-970763afe00e\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") "
Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.297905 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bzthn\" (UniqueName: \"kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn\") pod \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\") "
Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.298023 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts\") pod \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\" (UID: \"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9\")
" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.298060 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts\") pod \"70745a35-fe6f-4248-ac87-970763afe00e\" (UID: \"70745a35-fe6f-4248-ac87-970763afe00e\") " Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.298970 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "70745a35-fe6f-4248-ac87-970763afe00e" (UID: "70745a35-fe6f-4248-ac87-970763afe00e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.299177 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" (UID: "a9b2a0f1-96d1-4edc-a219-60194a2bf4b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.303245 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn" (OuterVolumeSpecName: "kube-api-access-bzthn") pod "a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" (UID: "a9b2a0f1-96d1-4edc-a219-60194a2bf4b9"). InnerVolumeSpecName "kube-api-access-bzthn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.303790 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx" (OuterVolumeSpecName: "kube-api-access-9gzxx") pod "70745a35-fe6f-4248-ac87-970763afe00e" (UID: "70745a35-fe6f-4248-ac87-970763afe00e"). InnerVolumeSpecName "kube-api-access-9gzxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.400614 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.400650 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/70745a35-fe6f-4248-ac87-970763afe00e-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.400660 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gzxx\" (UniqueName: \"kubernetes.io/projected/70745a35-fe6f-4248-ac87-970763afe00e-kube-api-access-9gzxx\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.400670 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bzthn\" (UniqueName: \"kubernetes.io/projected/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9-kube-api-access-bzthn\") on node \"crc\" DevicePath \"\"" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.760664 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-35ea-account-create-update-6jb9f" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.760659 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-35ea-account-create-update-6jb9f" event={"ID":"a9b2a0f1-96d1-4edc-a219-60194a2bf4b9","Type":"ContainerDied","Data":"4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270"} Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.760881 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4270655960bcff0b37aaf25e5248214690bc52a37037dbd7f629795491d07270" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.763413 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-w7sqw" event={"ID":"70745a35-fe6f-4248-ac87-970763afe00e","Type":"ContainerDied","Data":"552f2828cf75b55f8bdebd9a56db3e613c667737eda344a9f955eb21c4de5edf"} Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.763476 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-w7sqw" Mar 20 08:58:53 crc kubenswrapper[5136]: I0320 08:58:53.763477 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="552f2828cf75b55f8bdebd9a56db3e613c667737eda344a9f955eb21c4de5edf" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.280082 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-wjckm"] Mar 20 08:58:59 crc kubenswrapper[5136]: E0320 08:58:59.280986 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" containerName="mariadb-account-create-update" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.281006 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" containerName="mariadb-account-create-update" Mar 20 08:58:59 crc kubenswrapper[5136]: E0320 08:58:59.281027 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70745a35-fe6f-4248-ac87-970763afe00e" containerName="mariadb-database-create" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.281035 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="70745a35-fe6f-4248-ac87-970763afe00e" containerName="mariadb-database-create" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.281252 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="70745a35-fe6f-4248-ac87-970763afe00e" containerName="mariadb-database-create" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.281276 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" containerName="mariadb-account-create-update" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.282122 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.283678 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.284529 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4k8x9" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.284577 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.284719 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.289184 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-wjckm"] Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.433066 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvzx8\" (UniqueName: \"kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.433160 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.433228 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle\") pod \"aodh-db-sync-wjckm\" (UID: 
\"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.433333 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.534894 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.534986 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvzx8\" (UniqueName: \"kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.535036 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.535126 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.542604 5136 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.542938 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.545416 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.552121 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvzx8\" (UniqueName: \"kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8\") pod \"aodh-db-sync-wjckm\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " pod="openstack/aodh-db-sync-wjckm" Mar 20 08:58:59 crc kubenswrapper[5136]: I0320 08:58:59.603232 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wjckm" Mar 20 08:59:00 crc kubenswrapper[5136]: W0320 08:59:00.036344 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9eddce1_1338_489a_b0e9_f008c33fea0f.slice/crio-4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d WatchSource:0}: Error finding container 4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d: Status 404 returned error can't find the container with id 4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d Mar 20 08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.045913 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-wjckm"] Mar 20 08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.055172 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-mwt5p"] Mar 20 08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.065551 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-mwt5p"] Mar 20 08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.404099 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:59:00 crc kubenswrapper[5136]: E0320 08:59:00.406654 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.411711 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695202be-4633-411e-9afe-fd706e1cfbe6" path="/var/lib/kubelet/pods/695202be-4633-411e-9afe-fd706e1cfbe6/volumes" Mar 20 
08:59:00 crc kubenswrapper[5136]: I0320 08:59:00.832435 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wjckm" event={"ID":"f9eddce1-1338-489a-b0e9-f008c33fea0f","Type":"ContainerStarted","Data":"4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d"} Mar 20 08:59:05 crc kubenswrapper[5136]: I0320 08:59:05.475560 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Mar 20 08:59:08 crc kubenswrapper[5136]: I0320 08:59:08.988854 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:08 crc kubenswrapper[5136]: I0320 08:59:08.989433 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" containerName="kube-state-metrics" containerID="cri-o://95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76" gracePeriod=30 Mar 20 08:59:09 crc kubenswrapper[5136]: I0320 08:59:09.499647 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:59:09 crc kubenswrapper[5136]: I0320 08:59:09.622167 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fc9n\" (UniqueName: \"kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n\") pod \"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac\" (UID: \"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac\") " Mar 20 08:59:09 crc kubenswrapper[5136]: I0320 08:59:09.634119 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n" (OuterVolumeSpecName: "kube-api-access-6fc9n") pod "ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" (UID: "ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac"). InnerVolumeSpecName "kube-api-access-6fc9n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:09 crc kubenswrapper[5136]: I0320 08:59:09.724337 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fc9n\" (UniqueName: \"kubernetes.io/projected/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac-kube-api-access-6fc9n\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.134071 5136 generic.go:334] "Generic (PLEG): container finished" podID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" containerID="95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76" exitCode=2 Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.134121 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac","Type":"ContainerDied","Data":"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76"} Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.134156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac","Type":"ContainerDied","Data":"6ff5903a5cc26ba52c8004bed71ec6d792235b4e0088d5c041ffe70d1e0d7e6a"} Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.134165 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.134177 5136 scope.go:117] "RemoveContainer" containerID="95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.159189 5136 scope.go:117] "RemoveContainer" containerID="95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76" Mar 20 08:59:10 crc kubenswrapper[5136]: E0320 08:59:10.159670 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76\": container with ID starting with 95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76 not found: ID does not exist" containerID="95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.159709 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76"} err="failed to get container status \"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76\": rpc error: code = NotFound desc = could not find container \"95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76\": container with ID starting with 95977db38ee1f44693cb25b914c16cc8f59c4c6bf618255fd7fb1f5a495e2a76 not found: ID does not exist" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.186971 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.203511 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.214179 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:10 crc kubenswrapper[5136]: E0320 
08:59:10.214676 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" containerName="kube-state-metrics" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.214692 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" containerName="kube-state-metrics" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.214966 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" containerName="kube-state-metrics" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.215806 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.223267 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.249580 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.249741 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.352896 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgdmf\" (UniqueName: \"kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.352976 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: 
\"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.353008 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.353337 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.412468 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac" path="/var/lib/kubelet/pods/ef0bec03-ab6b-4223-b1ed-f21e0a84c8ac/volumes" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.455463 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.455523 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 
08:59:10.455621 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.455689 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgdmf\" (UniqueName: \"kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.471899 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.472074 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.475914 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.477608 5136 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hgdmf\" (UniqueName: \"kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf\") pod \"kube-state-metrics-0\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.571457 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.954201 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.956304 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-central-agent" containerID="cri-o://58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" gracePeriod=30 Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.957258 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="proxy-httpd" containerID="cri-o://64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" gracePeriod=30 Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.957370 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="sg-core" containerID="cri-o://3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" gracePeriod=30 Mar 20 08:59:10 crc kubenswrapper[5136]: I0320 08:59:10.957481 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-notification-agent" containerID="cri-o://7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" 
gracePeriod=30 Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.068246 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.146753 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b","Type":"ContainerStarted","Data":"5287b5546ce2593541455a64057c115d269b5e5f8d4df65c154feacababa85d9"} Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.149440 5136 generic.go:334] "Generic (PLEG): container finished" podID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerID="64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" exitCode=0 Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.149469 5136 generic.go:334] "Generic (PLEG): container finished" podID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerID="3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" exitCode=2 Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.149519 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerDied","Data":"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3"} Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.149547 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerDied","Data":"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59"} Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.844555 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.985781 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gxrt\" (UniqueName: \"kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986271 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986347 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986402 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986490 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986574 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.986613 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd\") pod \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\" (UID: \"dc92a2e9-70bb-400b-ab37-3e17b334a8de\") " Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.988162 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.988805 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.998845 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts" (OuterVolumeSpecName: "scripts") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:11 crc kubenswrapper[5136]: I0320 08:59:11.998921 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt" (OuterVolumeSpecName: "kube-api-access-8gxrt") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "kube-api-access-8gxrt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.025597 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.068774 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.083812 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data" (OuterVolumeSpecName: "config-data") pod "dc92a2e9-70bb-400b-ab37-3e17b334a8de" (UID: "dc92a2e9-70bb-400b-ab37-3e17b334a8de"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088597 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088629 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088638 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/dc92a2e9-70bb-400b-ab37-3e17b334a8de-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088648 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gxrt\" (UniqueName: \"kubernetes.io/projected/dc92a2e9-70bb-400b-ab37-3e17b334a8de-kube-api-access-8gxrt\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088657 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088665 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.088673 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc92a2e9-70bb-400b-ab37-3e17b334a8de-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168524 5136 generic.go:334] "Generic 
(PLEG): container finished" podID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerID="7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" exitCode=0 Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168563 5136 generic.go:334] "Generic (PLEG): container finished" podID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerID="58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" exitCode=0 Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168594 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168627 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerDied","Data":"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192"} Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168658 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerDied","Data":"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9"} Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168672 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"dc92a2e9-70bb-400b-ab37-3e17b334a8de","Type":"ContainerDied","Data":"e06e56d6f6d6b7a4d88b163ff2bdce81afa10713d484b4f149a44013355b9963"} Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.168690 5136 scope.go:117] "RemoveContainer" containerID="64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.172744 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b","Type":"ContainerStarted","Data":"aa45ed833f1221b9bb131eddde949a0bc22ff821678a36dab8182db02d897f83"} Mar 20 08:59:12 crc 
kubenswrapper[5136]: I0320 08:59:12.173122 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.193276 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.7664246860000001 podStartE2EDuration="2.19325369s" podCreationTimestamp="2026-03-20 08:59:10 +0000 UTC" firstStartedPulling="2026-03-20 08:59:11.086518626 +0000 UTC m=+7783.345829777" lastFinishedPulling="2026-03-20 08:59:11.51334763 +0000 UTC m=+7783.772658781" observedRunningTime="2026-03-20 08:59:12.191440924 +0000 UTC m=+7784.450752075" watchObservedRunningTime="2026-03-20 08:59:12.19325369 +0000 UTC m=+7784.452564841" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.196267 5136 scope.go:117] "RemoveContainer" containerID="3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.226072 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.240417 5136 scope.go:117] "RemoveContainer" containerID="7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.253125 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.286929 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.287549 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-central-agent" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.287642 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-central-agent" Mar 20 
08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.287707 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-notification-agent" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.287757 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-notification-agent" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.287968 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="sg-core" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288099 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="sg-core" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.288190 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="proxy-httpd" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288268 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="proxy-httpd" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288548 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="proxy-httpd" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288752 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="ceilometer-notification-agent" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288961 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" containerName="sg-core" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.289098 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" 
containerName="ceilometer-central-agent" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.288792 5136 scope.go:117] "RemoveContainer" containerID="58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.291540 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.293224 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.296722 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.297018 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.298563 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317101 5136 scope.go:117] "RemoveContainer" containerID="64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.317431 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3\": container with ID starting with 64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3 not found: ID does not exist" containerID="64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317459 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3"} err="failed to get container status 
\"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3\": rpc error: code = NotFound desc = could not find container \"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3\": container with ID starting with 64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317479 5136 scope.go:117] "RemoveContainer" containerID="3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.317661 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59\": container with ID starting with 3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59 not found: ID does not exist" containerID="3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317676 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59"} err="failed to get container status \"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59\": rpc error: code = NotFound desc = could not find container \"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59\": container with ID starting with 3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317687 5136 scope.go:117] "RemoveContainer" containerID="7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.317861 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192\": container with ID starting with 7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192 not found: ID does not exist" containerID="7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317884 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192"} err="failed to get container status \"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192\": rpc error: code = NotFound desc = could not find container \"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192\": container with ID starting with 7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.317899 5136 scope.go:117] "RemoveContainer" containerID="58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" Mar 20 08:59:12 crc kubenswrapper[5136]: E0320 08:59:12.318067 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9\": container with ID starting with 58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9 not found: ID does not exist" containerID="58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318088 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9"} err="failed to get container status \"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9\": rpc error: code = NotFound desc = could not find container \"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9\": container with ID 
starting with 58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318103 5136 scope.go:117] "RemoveContainer" containerID="64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318258 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3"} err="failed to get container status \"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3\": rpc error: code = NotFound desc = could not find container \"64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3\": container with ID starting with 64b7b5a6e5b14213c074648fa78ce4e013f6f986c94a22df72fec3f148595ba3 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318273 5136 scope.go:117] "RemoveContainer" containerID="3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318680 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59"} err="failed to get container status \"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59\": rpc error: code = NotFound desc = could not find container \"3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59\": container with ID starting with 3f955bb4a97077f0cb98be00458e981ed138e3134c5ba2284441c911dda3fa59 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.318707 5136 scope.go:117] "RemoveContainer" containerID="7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.319086 5136 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192"} err="failed to get container status \"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192\": rpc error: code = NotFound desc = could not find container \"7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192\": container with ID starting with 7bc901dd9d9e56a3c650e02a5b5fa2dd120ef1a893c42e172f3f2fc680183192 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.319100 5136 scope.go:117] "RemoveContainer" containerID="58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.319254 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9"} err="failed to get container status \"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9\": rpc error: code = NotFound desc = could not find container \"58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9\": container with ID starting with 58662fab45a275b2c19cdb9946983decbab55dae408ae15a8e5cd9c411cba7d9 not found: ID does not exist" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.397437 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.397543 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 
20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.397765 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.397884 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.398004 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.398071 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.398152 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qhqq\" (UniqueName: \"kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.398278 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.408023 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc92a2e9-70bb-400b-ab37-3e17b334a8de" path="/var/lib/kubelet/pods/dc92a2e9-70bb-400b-ab37-3e17b334a8de/volumes" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.500663 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.500728 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.500802 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.500916 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 
08:59:12.500983 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.501052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.501097 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.501134 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qhqq\" (UniqueName: \"kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.501557 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.501588 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd\") pod \"ceilometer-0\" (UID: 
\"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.506950 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.507265 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.507275 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.507266 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.515561 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.520261 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qhqq\" (UniqueName: 
\"kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq\") pod \"ceilometer-0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " pod="openstack/ceilometer-0" Mar 20 08:59:12 crc kubenswrapper[5136]: I0320 08:59:12.617380 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:15 crc kubenswrapper[5136]: I0320 08:59:15.203463 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wjckm" event={"ID":"f9eddce1-1338-489a-b0e9-f008c33fea0f","Type":"ContainerStarted","Data":"456904b2c9b2b9b900c280b5851fc80a26454d886df3722f5e23e7c54d551d62"} Mar 20 08:59:15 crc kubenswrapper[5136]: W0320 08:59:15.230338 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd4790c11_3203_4f22_958f_a67c1242beb0.slice/crio-ee0fc0df5610cea45ffcc78247a58835bb3ee685c340c2aef6349fea9aee9ded WatchSource:0}: Error finding container ee0fc0df5610cea45ffcc78247a58835bb3ee685c340c2aef6349fea9aee9ded: Status 404 returned error can't find the container with id ee0fc0df5610cea45ffcc78247a58835bb3ee685c340c2aef6349fea9aee9ded Mar 20 08:59:15 crc kubenswrapper[5136]: I0320 08:59:15.232861 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:15 crc kubenswrapper[5136]: I0320 08:59:15.245768 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-wjckm" podStartSLOduration=1.450293445 podStartE2EDuration="16.245747067s" podCreationTimestamp="2026-03-20 08:58:59 +0000 UTC" firstStartedPulling="2026-03-20 08:59:00.039035422 +0000 UTC m=+7772.298346573" lastFinishedPulling="2026-03-20 08:59:14.834489044 +0000 UTC m=+7787.093800195" observedRunningTime="2026-03-20 08:59:15.220084132 +0000 UTC m=+7787.479395283" watchObservedRunningTime="2026-03-20 08:59:15.245747067 +0000 UTC m=+7787.505058238" Mar 20 08:59:15 crc 
kubenswrapper[5136]: I0320 08:59:15.398368 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:59:15 crc kubenswrapper[5136]: E0320 08:59:15.398932 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:59:16 crc kubenswrapper[5136]: I0320 08:59:16.217347 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerStarted","Data":"dfb994bb52968a72e193beb61fdb0d2ba31b06ee0870ee3288671db16099e8d8"} Mar 20 08:59:16 crc kubenswrapper[5136]: I0320 08:59:16.218232 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerStarted","Data":"94bbbcf0d712721420b13672832b679990ab864a6cb813e885e5d906666682e2"} Mar 20 08:59:16 crc kubenswrapper[5136]: I0320 08:59:16.218297 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerStarted","Data":"ee0fc0df5610cea45ffcc78247a58835bb3ee685c340c2aef6349fea9aee9ded"} Mar 20 08:59:17 crc kubenswrapper[5136]: I0320 08:59:17.227040 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerStarted","Data":"64db95b7d486d98993a1930605f414bffad7d70821f1b278f87f8d8dfcb81903"} Mar 20 08:59:17 crc kubenswrapper[5136]: I0320 08:59:17.229953 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="f9eddce1-1338-489a-b0e9-f008c33fea0f" containerID="456904b2c9b2b9b900c280b5851fc80a26454d886df3722f5e23e7c54d551d62" exitCode=0 Mar 20 08:59:17 crc kubenswrapper[5136]: I0320 08:59:17.229983 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wjckm" event={"ID":"f9eddce1-1338-489a-b0e9-f008c33fea0f","Type":"ContainerDied","Data":"456904b2c9b2b9b900c280b5851fc80a26454d886df3722f5e23e7c54d551d62"} Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.593085 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-wjckm" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.727672 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvzx8\" (UniqueName: \"kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8\") pod \"f9eddce1-1338-489a-b0e9-f008c33fea0f\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.728052 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts\") pod \"f9eddce1-1338-489a-b0e9-f008c33fea0f\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.728124 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data\") pod \"f9eddce1-1338-489a-b0e9-f008c33fea0f\" (UID: \"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.728217 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle\") pod \"f9eddce1-1338-489a-b0e9-f008c33fea0f\" (UID: 
\"f9eddce1-1338-489a-b0e9-f008c33fea0f\") " Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.733386 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts" (OuterVolumeSpecName: "scripts") pod "f9eddce1-1338-489a-b0e9-f008c33fea0f" (UID: "f9eddce1-1338-489a-b0e9-f008c33fea0f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.747092 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8" (OuterVolumeSpecName: "kube-api-access-vvzx8") pod "f9eddce1-1338-489a-b0e9-f008c33fea0f" (UID: "f9eddce1-1338-489a-b0e9-f008c33fea0f"). InnerVolumeSpecName "kube-api-access-vvzx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.752830 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data" (OuterVolumeSpecName: "config-data") pod "f9eddce1-1338-489a-b0e9-f008c33fea0f" (UID: "f9eddce1-1338-489a-b0e9-f008c33fea0f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.762033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f9eddce1-1338-489a-b0e9-f008c33fea0f" (UID: "f9eddce1-1338-489a-b0e9-f008c33fea0f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.830398 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvzx8\" (UniqueName: \"kubernetes.io/projected/f9eddce1-1338-489a-b0e9-f008c33fea0f-kube-api-access-vvzx8\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.830432 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.830441 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:18 crc kubenswrapper[5136]: I0320 08:59:18.830450 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f9eddce1-1338-489a-b0e9-f008c33fea0f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.250737 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-wjckm" Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.250754 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-wjckm" event={"ID":"f9eddce1-1338-489a-b0e9-f008c33fea0f","Type":"ContainerDied","Data":"4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d"} Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.251203 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b2049f62c5cb346cd5e6e89c80b8c597ae5f2242aeb69abd7feebc13a61232d" Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.255850 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerStarted","Data":"4cb2f797386081467747b2d3643bb044441191902352455ca9624fcb6ce13111"} Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.255994 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 08:59:19 crc kubenswrapper[5136]: I0320 08:59:19.292044 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.243443995 podStartE2EDuration="7.292025711s" podCreationTimestamp="2026-03-20 08:59:12 +0000 UTC" firstStartedPulling="2026-03-20 08:59:15.232585899 +0000 UTC m=+7787.491897060" lastFinishedPulling="2026-03-20 08:59:18.281167625 +0000 UTC m=+7790.540478776" observedRunningTime="2026-03-20 08:59:19.286776779 +0000 UTC m=+7791.546087940" watchObservedRunningTime="2026-03-20 08:59:19.292025711 +0000 UTC m=+7791.551336862" Mar 20 08:59:20 crc kubenswrapper[5136]: I0320 08:59:20.584644 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.257707 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 20 08:59:24 crc 
kubenswrapper[5136]: E0320 08:59:24.258570 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9eddce1-1338-489a-b0e9-f008c33fea0f" containerName="aodh-db-sync" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.258586 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9eddce1-1338-489a-b0e9-f008c33fea0f" containerName="aodh-db-sync" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.258886 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9eddce1-1338-489a-b0e9-f008c33fea0f" containerName="aodh-db-sync" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.261724 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.266608 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4k8x9" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.266948 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.267204 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.275151 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.351500 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.351630 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpx8b\" (UniqueName: 
\"kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.351691 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.351716 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.453868 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.453942 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.454120 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.454150 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpx8b\" (UniqueName: \"kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.459132 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.460363 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.465152 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.474734 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpx8b\" (UniqueName: \"kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b\") pod \"aodh-0\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " pod="openstack/aodh-0" Mar 20 08:59:24 crc kubenswrapper[5136]: I0320 08:59:24.591978 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 20 08:59:25 crc kubenswrapper[5136]: I0320 08:59:25.098590 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 20 08:59:25 crc kubenswrapper[5136]: I0320 08:59:25.325750 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerStarted","Data":"d8bdfeca9bd1597fa3d2bc3b892eb75e23fce5575693634908f1e50575aa3005"} Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.303317 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.303949 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="proxy-httpd" containerID="cri-o://4cb2f797386081467747b2d3643bb044441191902352455ca9624fcb6ce13111" gracePeriod=30 Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.304068 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-notification-agent" containerID="cri-o://dfb994bb52968a72e193beb61fdb0d2ba31b06ee0870ee3288671db16099e8d8" gracePeriod=30 Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.304065 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="sg-core" containerID="cri-o://64db95b7d486d98993a1930605f414bffad7d70821f1b278f87f8d8dfcb81903" gracePeriod=30 Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.304796 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-central-agent" 
containerID="cri-o://94bbbcf0d712721420b13672832b679990ab864a6cb813e885e5d906666682e2" gracePeriod=30 Mar 20 08:59:26 crc kubenswrapper[5136]: I0320 08:59:26.338303 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerStarted","Data":"92ea8495e9dfbb89165f77881cd9b84fab88074bc3bb11952d375d651c22c915"} Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.364579 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4790c11-3203-4f22-958f-a67c1242beb0" containerID="4cb2f797386081467747b2d3643bb044441191902352455ca9624fcb6ce13111" exitCode=0 Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.366536 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4790c11-3203-4f22-958f-a67c1242beb0" containerID="64db95b7d486d98993a1930605f414bffad7d70821f1b278f87f8d8dfcb81903" exitCode=2 Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.366605 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4790c11-3203-4f22-958f-a67c1242beb0" containerID="dfb994bb52968a72e193beb61fdb0d2ba31b06ee0870ee3288671db16099e8d8" exitCode=0 Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.366678 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4790c11-3203-4f22-958f-a67c1242beb0" containerID="94bbbcf0d712721420b13672832b679990ab864a6cb813e885e5d906666682e2" exitCode=0 Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.364633 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerDied","Data":"4cb2f797386081467747b2d3643bb044441191902352455ca9624fcb6ce13111"} Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.366879 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerDied","Data":"64db95b7d486d98993a1930605f414bffad7d70821f1b278f87f8d8dfcb81903"} Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.366959 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerDied","Data":"dfb994bb52968a72e193beb61fdb0d2ba31b06ee0870ee3288671db16099e8d8"} Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.367021 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerDied","Data":"94bbbcf0d712721420b13672832b679990ab864a6cb813e885e5d906666682e2"} Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.439598 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618033 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qhqq\" (UniqueName: \"kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618098 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618121 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc 
kubenswrapper[5136]: I0320 08:59:27.618190 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618242 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618288 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618505 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618351 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.618788 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.619023 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs\") pod \"d4790c11-3203-4f22-958f-a67c1242beb0\" (UID: \"d4790c11-3203-4f22-958f-a67c1242beb0\") " Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.619596 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.619619 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d4790c11-3203-4f22-958f-a67c1242beb0-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.622620 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts" (OuterVolumeSpecName: "scripts") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: 
"d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.626949 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq" (OuterVolumeSpecName: "kube-api-access-2qhqq") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "kube-api-access-2qhqq". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.651805 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.708166 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.722572 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.722616 5136 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.722630 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qhqq\" (UniqueName: \"kubernetes.io/projected/d4790c11-3203-4f22-958f-a67c1242beb0-kube-api-access-2qhqq\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.722641 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.725993 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.760096 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data" (OuterVolumeSpecName: "config-data") pod "d4790c11-3203-4f22-958f-a67c1242beb0" (UID: "d4790c11-3203-4f22-958f-a67c1242beb0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.824293 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:27 crc kubenswrapper[5136]: I0320 08:59:27.824336 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d4790c11-3203-4f22-958f-a67c1242beb0-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.283447 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.381624 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.381634 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d4790c11-3203-4f22-958f-a67c1242beb0","Type":"ContainerDied","Data":"ee0fc0df5610cea45ffcc78247a58835bb3ee685c340c2aef6349fea9aee9ded"} Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.381679 5136 scope.go:117] "RemoveContainer" containerID="4cb2f797386081467747b2d3643bb044441191902352455ca9624fcb6ce13111" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.388564 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerStarted","Data":"42609fe8611cecf81531dc42f446b934dedc3a17199479496c8429f2b36967fa"} Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.406093 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:59:28 crc kubenswrapper[5136]: E0320 08:59:28.406457 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.546449 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.564006 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.573716 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:28 crc kubenswrapper[5136]: E0320 08:59:28.574183 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="proxy-httpd" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574195 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="proxy-httpd" Mar 20 08:59:28 crc kubenswrapper[5136]: E0320 08:59:28.574213 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="sg-core" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574220 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="sg-core" Mar 20 08:59:28 crc kubenswrapper[5136]: E0320 08:59:28.574231 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-notification-agent" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574238 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-notification-agent" Mar 
20 08:59:28 crc kubenswrapper[5136]: E0320 08:59:28.574253 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-central-agent" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574259 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-central-agent" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574455 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-central-agent" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574467 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="sg-core" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574659 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="ceilometer-notification-agent" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.574677 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" containerName="proxy-httpd" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.583011 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.585920 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.586017 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.586100 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.591401 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.623117 5136 scope.go:117] "RemoveContainer" containerID="64db95b7d486d98993a1930605f414bffad7d70821f1b278f87f8d8dfcb81903" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.673246 5136 scope.go:117] "RemoveContainer" containerID="dfb994bb52968a72e193beb61fdb0d2ba31b06ee0870ee3288671db16099e8d8" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.694377 5136 scope.go:117] "RemoveContainer" containerID="94bbbcf0d712721420b13672832b679990ab864a6cb813e885e5d906666682e2" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748177 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748210 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2srg\" (UniqueName: \"kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " 
pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748291 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748318 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748350 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748375 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748410 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.748450 5136 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850395 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850459 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850490 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850516 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850556 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts\") pod \"ceilometer-0\" (UID: 
\"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850610 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850686 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850707 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2srg\" (UniqueName: \"kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.850829 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.851332 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.854371 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.854529 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.856273 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.859760 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.868203 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc kubenswrapper[5136]: I0320 08:59:28.868864 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2srg\" (UniqueName: \"kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg\") pod \"ceilometer-0\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " pod="openstack/ceilometer-0" Mar 20 08:59:28 crc 
kubenswrapper[5136]: I0320 08:59:28.911567 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:29 crc kubenswrapper[5136]: I0320 08:59:29.397027 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:29 crc kubenswrapper[5136]: I0320 08:59:29.436987 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerStarted","Data":"e4d81d5cd609468f5967185a1aceb869c881231de4dfdcd090b8f09483f137cc"} Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.406994 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4790c11-3203-4f22-958f-a67c1242beb0" path="/var/lib/kubelet/pods/d4790c11-3203-4f22-958f-a67c1242beb0/volumes" Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.446125 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerStarted","Data":"15e06db9315f6411797d0f9aa67f1b07e5684d9702a9fc4442dbbc0814ef18e3"} Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.446268 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-api" containerID="cri-o://92ea8495e9dfbb89165f77881cd9b84fab88074bc3bb11952d375d651c22c915" gracePeriod=30 Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.446687 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-listener" containerID="cri-o://15e06db9315f6411797d0f9aa67f1b07e5684d9702a9fc4442dbbc0814ef18e3" gracePeriod=30 Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.446739 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" 
podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-notifier" containerID="cri-o://e4d81d5cd609468f5967185a1aceb869c881231de4dfdcd090b8f09483f137cc" gracePeriod=30 Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.446773 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-evaluator" containerID="cri-o://42609fe8611cecf81531dc42f446b934dedc3a17199479496c8429f2b36967fa" gracePeriod=30 Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.449231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerStarted","Data":"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0"} Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.449256 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerStarted","Data":"40f72fb3e6c0e5affb24c292b9e3430449f93e14ef08ad4d32955ae47cb5c29a"} Mar 20 08:59:30 crc kubenswrapper[5136]: I0320 08:59:30.480195 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:31 crc kubenswrapper[5136]: I0320 08:59:31.459672 5136 generic.go:334] "Generic (PLEG): container finished" podID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerID="42609fe8611cecf81531dc42f446b934dedc3a17199479496c8429f2b36967fa" exitCode=0 Mar 20 08:59:31 crc kubenswrapper[5136]: I0320 08:59:31.460221 5136 generic.go:334] "Generic (PLEG): container finished" podID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerID="92ea8495e9dfbb89165f77881cd9b84fab88074bc3bb11952d375d651c22c915" exitCode=0 Mar 20 08:59:31 crc kubenswrapper[5136]: I0320 08:59:31.459767 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" 
event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerDied","Data":"42609fe8611cecf81531dc42f446b934dedc3a17199479496c8429f2b36967fa"} Mar 20 08:59:31 crc kubenswrapper[5136]: I0320 08:59:31.460267 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerDied","Data":"92ea8495e9dfbb89165f77881cd9b84fab88074bc3bb11952d375d651c22c915"} Mar 20 08:59:31 crc kubenswrapper[5136]: I0320 08:59:31.463230 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerStarted","Data":"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768"} Mar 20 08:59:32 crc kubenswrapper[5136]: I0320 08:59:32.471914 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerStarted","Data":"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475"} Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.489963 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerStarted","Data":"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833"} Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.490294 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-central-agent" containerID="cri-o://229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0" gracePeriod=30 Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.490571 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.490639 5136 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="proxy-httpd" containerID="cri-o://58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833" gracePeriod=30 Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.490779 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-notification-agent" containerID="cri-o://335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768" gracePeriod=30 Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.490890 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="sg-core" containerID="cri-o://c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475" gracePeriod=30 Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.517432 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.7189996130000003 podStartE2EDuration="6.517409344s" podCreationTimestamp="2026-03-20 08:59:28 +0000 UTC" firstStartedPulling="2026-03-20 08:59:29.427174068 +0000 UTC m=+7801.686485219" lastFinishedPulling="2026-03-20 08:59:33.225583799 +0000 UTC m=+7805.484894950" observedRunningTime="2026-03-20 08:59:34.511678086 +0000 UTC m=+7806.770989247" watchObservedRunningTime="2026-03-20 08:59:34.517409344 +0000 UTC m=+7806.776720505" Mar 20 08:59:34 crc kubenswrapper[5136]: I0320 08:59:34.527793 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=5.5670325080000005 podStartE2EDuration="10.527768034s" podCreationTimestamp="2026-03-20 08:59:24 +0000 UTC" firstStartedPulling="2026-03-20 08:59:25.09395475 +0000 UTC m=+7797.353265901" lastFinishedPulling="2026-03-20 08:59:30.054690276 +0000 UTC m=+7802.314001427" observedRunningTime="2026-03-20 
08:59:30.49567888 +0000 UTC m=+7802.754990021" watchObservedRunningTime="2026-03-20 08:59:34.527768034 +0000 UTC m=+7806.787079195" Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502370 5136 generic.go:334] "Generic (PLEG): container finished" podID="a5ddc272-9064-4e30-ba27-01b92989b459" containerID="58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833" exitCode=0 Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502701 5136 generic.go:334] "Generic (PLEG): container finished" podID="a5ddc272-9064-4e30-ba27-01b92989b459" containerID="c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475" exitCode=2 Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502712 5136 generic.go:334] "Generic (PLEG): container finished" podID="a5ddc272-9064-4e30-ba27-01b92989b459" containerID="335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768" exitCode=0 Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502416 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerDied","Data":"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833"} Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502748 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerDied","Data":"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475"} Mar 20 08:59:35 crc kubenswrapper[5136]: I0320 08:59:35.502762 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerDied","Data":"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768"} Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.166922 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327364 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327434 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327481 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327578 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327632 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.327667 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.328164 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.328374 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.328904 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.328993 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2srg\" (UniqueName: \"kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg\") pod \"a5ddc272-9064-4e30-ba27-01b92989b459\" (UID: \"a5ddc272-9064-4e30-ba27-01b92989b459\") " Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.330298 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.330330 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5ddc272-9064-4e30-ba27-01b92989b459-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.333314 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts" (OuterVolumeSpecName: "scripts") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.336007 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg" (OuterVolumeSpecName: "kube-api-access-f2srg") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "kube-api-access-f2srg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.356040 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.390940 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.428857 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.432143 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.432215 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2srg\" (UniqueName: \"kubernetes.io/projected/a5ddc272-9064-4e30-ba27-01b92989b459-kube-api-access-f2srg\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.432235 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.432246 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.432287 5136 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.441966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data" (OuterVolumeSpecName: "config-data") pod "a5ddc272-9064-4e30-ba27-01b92989b459" (UID: "a5ddc272-9064-4e30-ba27-01b92989b459"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.512982 5136 generic.go:334] "Generic (PLEG): container finished" podID="a5ddc272-9064-4e30-ba27-01b92989b459" containerID="229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0" exitCode=0 Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.513037 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerDied","Data":"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0"} Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.513065 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.513092 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5ddc272-9064-4e30-ba27-01b92989b459","Type":"ContainerDied","Data":"40f72fb3e6c0e5affb24c292b9e3430449f93e14ef08ad4d32955ae47cb5c29a"} Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.513115 5136 scope.go:117] "RemoveContainer" containerID="58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.534387 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5ddc272-9064-4e30-ba27-01b92989b459-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.545649 5136 scope.go:117] "RemoveContainer" containerID="c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475" Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.555772 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.567390 5136 scope.go:117] "RemoveContainer" 
containerID="335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.583421 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.602871 5136 scope.go:117] "RemoveContainer" containerID="229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.603170 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.603655 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="proxy-httpd"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.603728 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="proxy-httpd"
Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.603846 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-notification-agent"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.603928 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-notification-agent"
Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.603990 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="sg-core"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.604041 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="sg-core"
Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.604112 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-central-agent"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.604171 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-central-agent"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.604401 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="proxy-httpd"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.604474 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-central-agent"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.604531 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="ceilometer-notification-agent"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.604613 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" containerName="sg-core"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.606655 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.609077 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.609499 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.609753 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.618711 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.638747 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj2tp\" (UniqueName: \"kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.638868 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639026 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639058 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639130 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639162 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639244 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.639345 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.659520 5136 scope.go:117] "RemoveContainer" containerID="58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833"
Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.661019 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833\": container with ID starting with 58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833 not found: ID does not exist" containerID="58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.661169 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833"} err="failed to get container status \"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833\": rpc error: code = NotFound desc = could not find container \"58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833\": container with ID starting with 58219639c56c354edcb3d456f8a0695240b041537f19bac312faaaf530f1d833 not found: ID does not exist"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.661285 5136 scope.go:117] "RemoveContainer" containerID="c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475"
Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.663310 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475\": container with ID starting with c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475 not found: ID does not exist" containerID="c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.663413 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475"} err="failed to get container status \"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475\": rpc error: code = NotFound desc = could not find container \"c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475\": container with ID starting with c8e154d7ff08c3f1f41112b4757e881ca999f907d35b6404b526712d35258475 not found: ID does not exist"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.663504 5136 scope.go:117] "RemoveContainer" containerID="335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768"
Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.664653 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768\": container with ID starting with 335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768 not found: ID does not exist" containerID="335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.664705 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768"} err="failed to get container status \"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768\": rpc error: code = NotFound desc = could not find container \"335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768\": container with ID starting with 335d2c6de4b56db30d29b2c07294a71c35bb6dbf0a8c30f1c4534a2ae9f8c768 not found: ID does not exist"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.664735 5136 scope.go:117] "RemoveContainer" containerID="229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0"
Mar 20 08:59:36 crc kubenswrapper[5136]: E0320 08:59:36.665190 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0\": container with ID starting with 229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0 not found: ID does not exist" containerID="229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.665225 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0"} err="failed to get container status \"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0\": rpc error: code = NotFound desc = could not find container \"229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0\": container with ID starting with 229d174f8cce06c18c633b67a2c0beb98cae390b874747fba2acbb9d39bda1b0 not found: ID does not exist"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.740990 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741038 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741091 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741117 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741165 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741211 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741296 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj2tp\" (UniqueName: \"kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.741342 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.742600 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.742773 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.745776 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.745859 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.746126 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.746229 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.749502 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.758453 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj2tp\" (UniqueName: \"kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp\") pod \"ceilometer-0\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") " pod="openstack/ceilometer-0"
Mar 20 08:59:36 crc kubenswrapper[5136]: I0320 08:59:36.933474 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.370738 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-c75wp"]
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.373582 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.385950 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c75wp"]
Mar 20 08:59:37 crc kubenswrapper[5136]: W0320 08:59:37.440068 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7dbff142_083b_40b7_a0d7_3f17fa9810e3.slice/crio-5dafeee7b076cbe9bc559d7bf9a4dfd78115dffe5bc845019a4034a5c5f12b09 WatchSource:0}: Error finding container 5dafeee7b076cbe9bc559d7bf9a4dfd78115dffe5bc845019a4034a5c5f12b09: Status 404 returned error can't find the container with id 5dafeee7b076cbe9bc559d7bf9a4dfd78115dffe5bc845019a4034a5c5f12b09
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.442320 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.534873 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerStarted","Data":"5dafeee7b076cbe9bc559d7bf9a4dfd78115dffe5bc845019a4034a5c5f12b09"}
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.561479 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6tcz\" (UniqueName: \"kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.561582 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.561976 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.664827 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.665052 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6tcz\" (UniqueName: \"kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.665097 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.665410 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.665643 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.682167 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6tcz\" (UniqueName: \"kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz\") pod \"certified-operators-c75wp\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") " pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:37 crc kubenswrapper[5136]: I0320 08:59:37.703148 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.205343 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-c75wp"]
Mar 20 08:59:38 crc kubenswrapper[5136]: W0320 08:59:38.221337 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87f36e33_74ba_42e9_82e7_229e00db3895.slice/crio-2810ab930e57f507f42644732b5541dac7657e6fae6a56d60c682ec4715b179f WatchSource:0}: Error finding container 2810ab930e57f507f42644732b5541dac7657e6fae6a56d60c682ec4715b179f: Status 404 returned error can't find the container with id 2810ab930e57f507f42644732b5541dac7657e6fae6a56d60c682ec4715b179f
Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.418089 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5ddc272-9064-4e30-ba27-01b92989b459" path="/var/lib/kubelet/pods/a5ddc272-9064-4e30-ba27-01b92989b459/volumes"
Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.544285 5136 generic.go:334] "Generic (PLEG): container finished" podID="87f36e33-74ba-42e9-82e7-229e00db3895" containerID="e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3" exitCode=0
Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.544358 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerDied","Data":"e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3"}
Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.544390 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerStarted","Data":"2810ab930e57f507f42644732b5541dac7657e6fae6a56d60c682ec4715b179f"}
Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.547956 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerStarted","Data":"d9e0d70b1ab5d2268043ec21cc179228d03e79f0c594fbabbe78f8b02d15cad9"}
Mar 20 08:59:38 crc kubenswrapper[5136]: I0320 08:59:38.548019 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerStarted","Data":"bdb39aa61401fd83441079957dca9d830d4f38dae3c0e327bcfc878794649036"}
Mar 20 08:59:39 crc kubenswrapper[5136]: I0320 08:59:39.559378 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerStarted","Data":"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe"}
Mar 20 08:59:39 crc kubenswrapper[5136]: I0320 08:59:39.561894 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerStarted","Data":"22214e3addd0a5c3b338ef171790692102833676b69426afd997304cb1243d2d"}
Mar 20 08:59:40 crc kubenswrapper[5136]: I0320 08:59:40.401056 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"
Mar 20 08:59:40 crc kubenswrapper[5136]: E0320 08:59:40.402002 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.117460 5136 scope.go:117] "RemoveContainer" containerID="997cb1e45862f8242bc1ed8efcbcfbaf8f8a8c5b47fc4938e5bd8fa980e12e82"
Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.165874 5136 scope.go:117] "RemoveContainer" containerID="429dc8682418b5dd369adad65e15254f5660e5ac47e728d920acf519227996a5"
Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.192190 5136 scope.go:117] "RemoveContainer" containerID="f82be34aae682e8c29705602a5f53ed7c575b686a407aea8e3a4986b123ff8de"
Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.218081 5136 scope.go:117] "RemoveContainer" containerID="44fe55ac22ffb29c918ccdbeaa595a57a885f7bfeba75321b48d4b09c0926e19"
Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.592850 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerStarted","Data":"65f0fcd421f6ec548cf9de08170a35a1209f40b76c2fe57dae5b8d4eb78f76fb"}
Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.593108 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.598577 5136 generic.go:334] "Generic (PLEG): container finished" podID="87f36e33-74ba-42e9-82e7-229e00db3895" containerID="290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe" exitCode=0
Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.598635 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerDied","Data":"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe"}
Mar 20 08:59:41 crc kubenswrapper[5136]: I0320 08:59:41.619600 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.216940152 podStartE2EDuration="5.619578939s" podCreationTimestamp="2026-03-20 08:59:36 +0000 UTC" firstStartedPulling="2026-03-20 08:59:37.442193836 +0000 UTC m=+7809.701504987" lastFinishedPulling="2026-03-20 08:59:40.844832613 +0000 UTC m=+7813.104143774" observedRunningTime="2026-03-20 08:59:41.614986587 +0000 UTC m=+7813.874297768" watchObservedRunningTime="2026-03-20 08:59:41.619578939 +0000 UTC m=+7813.878890090"
Mar 20 08:59:42 crc kubenswrapper[5136]: I0320 08:59:42.623324 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerStarted","Data":"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547"}
Mar 20 08:59:42 crc kubenswrapper[5136]: I0320 08:59:42.647952 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-c75wp" podStartSLOduration=2.1507553440000002 podStartE2EDuration="5.647927897s" podCreationTimestamp="2026-03-20 08:59:37 +0000 UTC" firstStartedPulling="2026-03-20 08:59:38.547407624 +0000 UTC m=+7810.806718775" lastFinishedPulling="2026-03-20 08:59:42.044580177 +0000 UTC m=+7814.303891328" observedRunningTime="2026-03-20 08:59:42.645014007 +0000 UTC m=+7814.904325198" watchObservedRunningTime="2026-03-20 08:59:42.647927897 +0000 UTC m=+7814.907239048"
Mar 20 08:59:47 crc kubenswrapper[5136]: I0320 08:59:47.702883 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:47 crc kubenswrapper[5136]: I0320 08:59:47.703200 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:47 crc kubenswrapper[5136]: I0320 08:59:47.755657 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:48 crc kubenswrapper[5136]: I0320 08:59:48.730235 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:48 crc kubenswrapper[5136]: I0320 08:59:48.788233 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c75wp"]
Mar 20 08:59:50 crc kubenswrapper[5136]: I0320 08:59:50.708102 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-c75wp" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="registry-server" containerID="cri-o://1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547" gracePeriod=2
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.205391 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.265053 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities\") pod \"87f36e33-74ba-42e9-82e7-229e00db3895\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") "
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.265132 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6tcz\" (UniqueName: \"kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz\") pod \"87f36e33-74ba-42e9-82e7-229e00db3895\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") "
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.266196 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities" (OuterVolumeSpecName: "utilities") pod "87f36e33-74ba-42e9-82e7-229e00db3895" (UID: "87f36e33-74ba-42e9-82e7-229e00db3895"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.271627 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz" (OuterVolumeSpecName: "kube-api-access-t6tcz") pod "87f36e33-74ba-42e9-82e7-229e00db3895" (UID: "87f36e33-74ba-42e9-82e7-229e00db3895"). InnerVolumeSpecName "kube-api-access-t6tcz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.366998 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content\") pod \"87f36e33-74ba-42e9-82e7-229e00db3895\" (UID: \"87f36e33-74ba-42e9-82e7-229e00db3895\") "
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.367405 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-utilities\") on node \"crc\" DevicePath \"\""
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.367428 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6tcz\" (UniqueName: \"kubernetes.io/projected/87f36e33-74ba-42e9-82e7-229e00db3895-kube-api-access-t6tcz\") on node \"crc\" DevicePath \"\""
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.423495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87f36e33-74ba-42e9-82e7-229e00db3895" (UID: "87f36e33-74ba-42e9-82e7-229e00db3895"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.469503 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f36e33-74ba-42e9-82e7-229e00db3895-catalog-content\") on node \"crc\" DevicePath \"\""
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.717093 5136 generic.go:334] "Generic (PLEG): container finished" podID="87f36e33-74ba-42e9-82e7-229e00db3895" containerID="1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547" exitCode=0
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.717164 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-c75wp"
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.717169 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerDied","Data":"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547"}
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.717549 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-c75wp" event={"ID":"87f36e33-74ba-42e9-82e7-229e00db3895","Type":"ContainerDied","Data":"2810ab930e57f507f42644732b5541dac7657e6fae6a56d60c682ec4715b179f"}
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.717567 5136 scope.go:117] "RemoveContainer" containerID="1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547"
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.753651 5136 scope.go:117] "RemoveContainer" containerID="290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe"
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.758074 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-c75wp"]
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.771255 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-c75wp"]
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.797884 5136 scope.go:117] "RemoveContainer" containerID="e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3"
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.840783 5136 scope.go:117] "RemoveContainer" containerID="1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547"
Mar 20 08:59:51 crc kubenswrapper[5136]: E0320 08:59:51.841857 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547\": container with ID starting with 1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547 not found: ID does not exist" containerID="1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547"
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.841920 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547"} err="failed to get container status \"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547\": rpc error: code = NotFound desc = could not find container \"1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547\": container with ID starting with 1c9496e48575b7096161d534027b4f4f5027e5a2513c3c45e1e17a643e2d0547 not found: ID does not exist"
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.841954 5136 scope.go:117] "RemoveContainer" containerID="290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe"
Mar 20 08:59:51 crc kubenswrapper[5136]: E0320 08:59:51.842318 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe\": container with ID starting with 290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe not found: ID does not exist" containerID="290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe"
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.842342 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe"} err="failed to get container status \"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe\": rpc error: code = NotFound desc = could not find container \"290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe\": container with ID starting with 290bb66209b0236e6b39348ed4358049c6230a50c466c14eeec8be13fa21a4fe not found: ID does not exist"
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.842360 5136 scope.go:117] "RemoveContainer" containerID="e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3"
Mar 20 08:59:51 crc kubenswrapper[5136]: E0320 08:59:51.842778 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3\": container with ID starting with e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3 not found: ID does not exist" containerID="e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3"
Mar 20 08:59:51 crc kubenswrapper[5136]: I0320 08:59:51.842855 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3"} err="failed to get container status \"e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3\": rpc error: code = NotFound desc = could not find container \"e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3\": container with ID starting with e15c2ed57fd06f3e4c968e87c666df2b037c989af254b85bf1d37b8922f813d3 not
found: ID does not exist" Mar 20 08:59:52 crc kubenswrapper[5136]: I0320 08:59:52.407240 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" path="/var/lib/kubelet/pods/87f36e33-74ba-42e9-82e7-229e00db3895/volumes" Mar 20 08:59:53 crc kubenswrapper[5136]: I0320 08:59:53.396925 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2" Mar 20 08:59:53 crc kubenswrapper[5136]: E0320 08:59:53.397471 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.173215 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566620-sh7c8"] Mar 20 09:00:00 crc kubenswrapper[5136]: E0320 09:00:00.174641 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="extract-content" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.174661 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="extract-content" Mar 20 09:00:00 crc kubenswrapper[5136]: E0320 09:00:00.174695 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="extract-utilities" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.174705 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="extract-utilities" Mar 20 09:00:00 crc kubenswrapper[5136]: E0320 09:00:00.174722 5136 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="registry-server" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.174729 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="registry-server" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.174993 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f36e33-74ba-42e9-82e7-229e00db3895" containerName="registry-server" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.175774 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.179157 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.179415 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.180707 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.187742 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc"] Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.189009 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.193737 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.194294 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.201824 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566620-sh7c8"] Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.236901 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc"] Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.287847 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw2vk\" (UniqueName: \"kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk\") pod \"auto-csr-approver-29566620-sh7c8\" (UID: \"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91\") " pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.288192 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kwcf\" (UniqueName: \"kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.288282 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.288377 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.389892 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.390034 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.390306 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw2vk\" (UniqueName: \"kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk\") pod \"auto-csr-approver-29566620-sh7c8\" (UID: \"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91\") " pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.390366 
5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kwcf\" (UniqueName: \"kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.391146 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.402915 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.406387 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw2vk\" (UniqueName: \"kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk\") pod \"auto-csr-approver-29566620-sh7c8\" (UID: \"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91\") " pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.418934 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kwcf\" (UniqueName: \"kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf\") pod \"collect-profiles-29566620-rtpfc\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.510066 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.524749 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.828019 5136 generic.go:334] "Generic (PLEG): container finished" podID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerID="15e06db9315f6411797d0f9aa67f1b07e5684d9702a9fc4442dbbc0814ef18e3" exitCode=137 Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.828284 5136 generic.go:334] "Generic (PLEG): container finished" podID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerID="e4d81d5cd609468f5967185a1aceb869c881231de4dfdcd090b8f09483f137cc" exitCode=137 Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.828305 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerDied","Data":"15e06db9315f6411797d0f9aa67f1b07e5684d9702a9fc4442dbbc0814ef18e3"} Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.828328 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerDied","Data":"e4d81d5cd609468f5967185a1aceb869c881231de4dfdcd090b8f09483f137cc"} Mar 20 09:00:00 crc kubenswrapper[5136]: I0320 09:00:00.927456 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.003884 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data\") pod \"b2e63488-a737-4c5d-8ec1-12df36065d97\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.004010 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpx8b\" (UniqueName: \"kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b\") pod \"b2e63488-a737-4c5d-8ec1-12df36065d97\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.004159 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts\") pod \"b2e63488-a737-4c5d-8ec1-12df36065d97\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.004191 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle\") pod \"b2e63488-a737-4c5d-8ec1-12df36065d97\" (UID: \"b2e63488-a737-4c5d-8ec1-12df36065d97\") " Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.010789 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b" (OuterVolumeSpecName: "kube-api-access-lpx8b") pod "b2e63488-a737-4c5d-8ec1-12df36065d97" (UID: "b2e63488-a737-4c5d-8ec1-12df36065d97"). InnerVolumeSpecName "kube-api-access-lpx8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.011953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts" (OuterVolumeSpecName: "scripts") pod "b2e63488-a737-4c5d-8ec1-12df36065d97" (UID: "b2e63488-a737-4c5d-8ec1-12df36065d97"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.059879 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc"] Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.106504 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpx8b\" (UniqueName: \"kubernetes.io/projected/b2e63488-a737-4c5d-8ec1-12df36065d97-kube-api-access-lpx8b\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.106810 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.139968 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b2e63488-a737-4c5d-8ec1-12df36065d97" (UID: "b2e63488-a737-4c5d-8ec1-12df36065d97"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.176329 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566620-sh7c8"] Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.185615 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.188160 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data" (OuterVolumeSpecName: "config-data") pod "b2e63488-a737-4c5d-8ec1-12df36065d97" (UID: "b2e63488-a737-4c5d-8ec1-12df36065d97"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.208483 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.208514 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2e63488-a737-4c5d-8ec1-12df36065d97-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.840421 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" event={"ID":"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91","Type":"ContainerStarted","Data":"6d284284512c2c1cff2dd9339264f2f06fd3e69ba0decda4bba7cddddc09cd1e"} Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.843851 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"b2e63488-a737-4c5d-8ec1-12df36065d97","Type":"ContainerDied","Data":"d8bdfeca9bd1597fa3d2bc3b892eb75e23fce5575693634908f1e50575aa3005"} Mar 20 09:00:01 crc 
kubenswrapper[5136]: I0320 09:00:01.843888 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.843917 5136 scope.go:117] "RemoveContainer" containerID="15e06db9315f6411797d0f9aa67f1b07e5684d9702a9fc4442dbbc0814ef18e3" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.845976 5136 generic.go:334] "Generic (PLEG): container finished" podID="927cb714-a185-49ad-a263-0d750b85ca34" containerID="cef340038d5981403f73c3d33f53d35230ad92fb789bb9c08b085b98f0a9ee81" exitCode=0 Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.846037 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" event={"ID":"927cb714-a185-49ad-a263-0d750b85ca34","Type":"ContainerDied","Data":"cef340038d5981403f73c3d33f53d35230ad92fb789bb9c08b085b98f0a9ee81"} Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.846074 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" event={"ID":"927cb714-a185-49ad-a263-0d750b85ca34","Type":"ContainerStarted","Data":"22e95ea38d66d8161e5b6e963b72527b10914b75e6e8d6825acfbe4f57546296"} Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.864580 5136 scope.go:117] "RemoveContainer" containerID="e4d81d5cd609468f5967185a1aceb869c881231de4dfdcd090b8f09483f137cc" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.887756 5136 scope.go:117] "RemoveContainer" containerID="42609fe8611cecf81531dc42f446b934dedc3a17199479496c8429f2b36967fa" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.890774 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.902424 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.911638 5136 scope.go:117] 
"RemoveContainer" containerID="92ea8495e9dfbb89165f77881cd9b84fab88074bc3bb11952d375d651c22c915" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.921530 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Mar 20 09:00:01 crc kubenswrapper[5136]: E0320 09:00:01.922047 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-notifier" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922068 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-notifier" Mar 20 09:00:01 crc kubenswrapper[5136]: E0320 09:00:01.922086 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-api" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922093 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-api" Mar 20 09:00:01 crc kubenswrapper[5136]: E0320 09:00:01.922111 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-listener" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922122 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-listener" Mar 20 09:00:01 crc kubenswrapper[5136]: E0320 09:00:01.922131 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-evaluator" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922137 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-evaluator" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922312 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" 
containerName="aodh-evaluator" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922331 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-api" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922340 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-notifier" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.922350 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" containerName="aodh-listener" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.924376 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.929305 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.929532 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.929804 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.929889 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.932271 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-4k8x9" Mar 20 09:00:01 crc kubenswrapper[5136]: I0320 09:00:01.948501 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.024914 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.025121 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.025282 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.025372 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.025413 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.025447 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 
crc kubenswrapper[5136]: I0320 09:00:02.127411 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.127474 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.127507 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.127651 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.128293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0" Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.128381 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle\") pod \"aodh-0\" (UID: 
\"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0"
Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.131728 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0"
Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.132537 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0"
Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.133388 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0"
Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.135426 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0"
Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.135924 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0"
Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.150902 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792\") pod \"aodh-0\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") " pod="openstack/aodh-0"
Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.242233 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.416438 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2e63488-a737-4c5d-8ec1-12df36065d97" path="/var/lib/kubelet/pods/b2e63488-a737-4c5d-8ec1-12df36065d97/volumes"
Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.693116 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"]
Mar 20 09:00:02 crc kubenswrapper[5136]: I0320 09:00:02.856298 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerStarted","Data":"c5f3a5b62a724af9b3292dfbea60cc84cb5ca65e111a8f8018f79664063a08d4"}
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.227190 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc"
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.359873 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume\") pod \"927cb714-a185-49ad-a263-0d750b85ca34\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") "
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.359952 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume\") pod \"927cb714-a185-49ad-a263-0d750b85ca34\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") "
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.360037 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kwcf\" (UniqueName: \"kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf\") pod \"927cb714-a185-49ad-a263-0d750b85ca34\" (UID: \"927cb714-a185-49ad-a263-0d750b85ca34\") "
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.360870 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume" (OuterVolumeSpecName: "config-volume") pod "927cb714-a185-49ad-a263-0d750b85ca34" (UID: "927cb714-a185-49ad-a263-0d750b85ca34"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.365606 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf" (OuterVolumeSpecName: "kube-api-access-8kwcf") pod "927cb714-a185-49ad-a263-0d750b85ca34" (UID: "927cb714-a185-49ad-a263-0d750b85ca34"). InnerVolumeSpecName "kube-api-access-8kwcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.366475 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "927cb714-a185-49ad-a263-0d750b85ca34" (UID: "927cb714-a185-49ad-a263-0d750b85ca34"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.462572 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/927cb714-a185-49ad-a263-0d750b85ca34-config-volume\") on node \"crc\" DevicePath \"\""
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.462610 5136 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/927cb714-a185-49ad-a263-0d750b85ca34-secret-volume\") on node \"crc\" DevicePath \"\""
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.462622 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kwcf\" (UniqueName: \"kubernetes.io/projected/927cb714-a185-49ad-a263-0d750b85ca34-kube-api-access-8kwcf\") on node \"crc\" DevicePath \"\""
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.866317 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc" event={"ID":"927cb714-a185-49ad-a263-0d750b85ca34","Type":"ContainerDied","Data":"22e95ea38d66d8161e5b6e963b72527b10914b75e6e8d6825acfbe4f57546296"}
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.866928 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22e95ea38d66d8161e5b6e963b72527b10914b75e6e8d6825acfbe4f57546296"
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.866594 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29566620-rtpfc"
Mar 20 09:00:03 crc kubenswrapper[5136]: I0320 09:00:03.868253 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerStarted","Data":"7cef857842ff9d2b9ec6fba6fced2a4a47da1ca6826d17831d4379411662258d"}
Mar 20 09:00:04 crc kubenswrapper[5136]: I0320 09:00:04.299529 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"]
Mar 20 09:00:04 crc kubenswrapper[5136]: I0320 09:00:04.308014 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29566575-8lvms"]
Mar 20 09:00:04 crc kubenswrapper[5136]: I0320 09:00:04.412986 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02161682-1526-46e0-aaa6-d09c6758943c" path="/var/lib/kubelet/pods/02161682-1526-46e0-aaa6-d09c6758943c/volumes"
Mar 20 09:00:04 crc kubenswrapper[5136]: I0320 09:00:04.880470 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerStarted","Data":"dcd83e13ab91d1b3d212755c407d51e55f888a96ac55ba1a109bfc09166fa35d"}
Mar 20 09:00:04 crc kubenswrapper[5136]: I0320 09:00:04.880704 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerStarted","Data":"2ead91f10403f4d804be86964d57e08ade2602b5155572d57113b707313fe0a4"}
Mar 20 09:00:05 crc kubenswrapper[5136]: I0320 09:00:05.893490 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerStarted","Data":"7e50b645c14d1546ec6ea5f4cc398a09d244fd42ce33f42ece0b4ffe6904f5e2"}
Mar 20 09:00:05 crc kubenswrapper[5136]: I0320 09:00:05.895329 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" event={"ID":"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91","Type":"ContainerStarted","Data":"d46abf622d038618ca2e56c8ba50c8df50e7f199364c722c3c72d53324ea811a"}
Mar 20 09:00:05 crc kubenswrapper[5136]: I0320 09:00:05.918614 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.951500733 podStartE2EDuration="4.918590414s" podCreationTimestamp="2026-03-20 09:00:01 +0000 UTC" firstStartedPulling="2026-03-20 09:00:02.698040585 +0000 UTC m=+7834.957351726" lastFinishedPulling="2026-03-20 09:00:04.665130256 +0000 UTC m=+7836.924441407" observedRunningTime="2026-03-20 09:00:05.917016235 +0000 UTC m=+7838.176327386" watchObservedRunningTime="2026-03-20 09:00:05.918590414 +0000 UTC m=+7838.177901565"
Mar 20 09:00:05 crc kubenswrapper[5136]: I0320 09:00:05.933297 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" podStartSLOduration=1.986246926 podStartE2EDuration="5.933281138s" podCreationTimestamp="2026-03-20 09:00:00 +0000 UTC" firstStartedPulling="2026-03-20 09:00:01.185254358 +0000 UTC m=+7833.444565509" lastFinishedPulling="2026-03-20 09:00:05.13228857 +0000 UTC m=+7837.391599721" observedRunningTime="2026-03-20 09:00:05.930145052 +0000 UTC m=+7838.189456203" watchObservedRunningTime="2026-03-20 09:00:05.933281138 +0000 UTC m=+7838.192592289"
Mar 20 09:00:06 crc kubenswrapper[5136]: I0320 09:00:06.908405 5136 generic.go:334] "Generic (PLEG): container finished" podID="fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" containerID="d46abf622d038618ca2e56c8ba50c8df50e7f199364c722c3c72d53324ea811a" exitCode=0
Mar 20 09:00:06 crc kubenswrapper[5136]: I0320 09:00:06.908494 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" event={"ID":"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91","Type":"ContainerDied","Data":"d46abf622d038618ca2e56c8ba50c8df50e7f199364c722c3c72d53324ea811a"}
Mar 20 09:00:06 crc kubenswrapper[5136]: I0320 09:00:06.953288 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.043631 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-tr2s5"]
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.054033 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6249-account-create-update-mrh6x"]
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.065064 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-tr2s5"]
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.073376 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6249-account-create-update-mrh6x"]
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.252349 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566620-sh7c8"
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.368454 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mw2vk\" (UniqueName: \"kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk\") pod \"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91\" (UID: \"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91\") "
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.376122 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk" (OuterVolumeSpecName: "kube-api-access-mw2vk") pod "fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" (UID: "fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91"). InnerVolumeSpecName "kube-api-access-mw2vk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.405087 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"
Mar 20 09:00:08 crc kubenswrapper[5136]: E0320 09:00:08.405740 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.411164 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="570ecd59-555d-4f55-aed1-6fe547da30b1" path="/var/lib/kubelet/pods/570ecd59-555d-4f55-aed1-6fe547da30b1/volumes"
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.411969 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd07221a-a5f4-4a47-a7bf-354b0d432b27" path="/var/lib/kubelet/pods/fd07221a-a5f4-4a47-a7bf-354b0d432b27/volumes"
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.471215 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mw2vk\" (UniqueName: \"kubernetes.io/projected/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91-kube-api-access-mw2vk\") on node \"crc\" DevicePath \"\""
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.929445 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566620-sh7c8" event={"ID":"fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91","Type":"ContainerDied","Data":"6d284284512c2c1cff2dd9339264f2f06fd3e69ba0decda4bba7cddddc09cd1e"}
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.929467 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566620-sh7c8"
Mar 20 09:00:08 crc kubenswrapper[5136]: I0320 09:00:08.929487 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d284284512c2c1cff2dd9339264f2f06fd3e69ba0decda4bba7cddddc09cd1e"
Mar 20 09:00:09 crc kubenswrapper[5136]: I0320 09:00:09.307615 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566614-wvfxr"]
Mar 20 09:00:09 crc kubenswrapper[5136]: I0320 09:00:09.317589 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566614-wvfxr"]
Mar 20 09:00:10 crc kubenswrapper[5136]: I0320 09:00:10.406772 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="746f2ae5-dabf-431a-b344-011a75049862" path="/var/lib/kubelet/pods/746f2ae5-dabf-431a-b344-011a75049862/volumes"
Mar 20 09:00:20 crc kubenswrapper[5136]: I0320 09:00:20.397849 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"
Mar 20 09:00:21 crc kubenswrapper[5136]: I0320 09:00:21.066177 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b"}
Mar 20 09:00:32 crc kubenswrapper[5136]: I0320 09:00:32.029207 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-dlmp5"]
Mar 20 09:00:32 crc kubenswrapper[5136]: I0320 09:00:32.040759 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-dlmp5"]
Mar 20 09:00:32 crc kubenswrapper[5136]: I0320 09:00:32.408488 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0757343-a168-444b-ab9f-eb32dc3e416a" path="/var/lib/kubelet/pods/d0757343-a168-444b-ab9f-eb32dc3e416a/volumes"
Mar 20 09:00:41 crc kubenswrapper[5136]: I0320 09:00:41.421116 5136 scope.go:117] "RemoveContainer" containerID="39d097d4e3a8458b775ea906bb0dd550fdd83b3369518a3cd12d9c26c24a8a02"
Mar 20 09:00:41 crc kubenswrapper[5136]: I0320 09:00:41.467995 5136 scope.go:117] "RemoveContainer" containerID="d7c966f182c94b6eabaca701ac9e2f115b1d66510a14ffb108fa112317b9c2d8"
Mar 20 09:00:41 crc kubenswrapper[5136]: I0320 09:00:41.537903 5136 scope.go:117] "RemoveContainer" containerID="ea4b393a20ea1ece97f36d015f4602f5e94b839ad564485cfca078d956bd0138"
Mar 20 09:00:41 crc kubenswrapper[5136]: I0320 09:00:41.629710 5136 scope.go:117] "RemoveContainer" containerID="62b91ae766226b0da7fe114136196e5dea194bad90be0b48d6f9d8c6e4102b25"
Mar 20 09:00:41 crc kubenswrapper[5136]: I0320 09:00:41.648446 5136 scope.go:117] "RemoveContainer" containerID="b8341630a66939232813fa3ca2eab063f076fc3ac4ee1803ba6693cd8bb7a98d"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.152140 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29566621-n7g7j"]
Mar 20 09:01:00 crc kubenswrapper[5136]: E0320 09:01:00.159955 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="927cb714-a185-49ad-a263-0d750b85ca34" containerName="collect-profiles"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.159983 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="927cb714-a185-49ad-a263-0d750b85ca34" containerName="collect-profiles"
Mar 20 09:01:00 crc kubenswrapper[5136]: E0320 09:01:00.160014 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" containerName="oc"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.160021 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" containerName="oc"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.160252 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" containerName="oc"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.160274 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="927cb714-a185-49ad-a263-0d750b85ca34" containerName="collect-profiles"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.161075 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.169889 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29566621-n7g7j"]
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.262752 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.262833 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx99t\" (UniqueName: \"kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.262913 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.262946 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.365460 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.365576 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hx99t\" (UniqueName: \"kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.365657 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.365698 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.376864 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.383829 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.388568 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.397430 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hx99t\" (UniqueName: \"kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t\") pod \"keystone-cron-29566621-n7g7j\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") " pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.496784 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:00 crc kubenswrapper[5136]: I0320 09:01:00.974070 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29566621-n7g7j"]
Mar 20 09:01:01 crc kubenswrapper[5136]: I0320 09:01:01.426171 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29566621-n7g7j" event={"ID":"8494da27-4688-4c23-b4bd-77a8cac9ae31","Type":"ContainerStarted","Data":"e9acc33cb6ef33f971afc8e98aee5abda02ae0be42cbb3e0b4beb36ffafb1e4d"}
Mar 20 09:01:01 crc kubenswrapper[5136]: I0320 09:01:01.426516 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29566621-n7g7j" event={"ID":"8494da27-4688-4c23-b4bd-77a8cac9ae31","Type":"ContainerStarted","Data":"94a7dd9b06f4da0a623eb78e6e371bb29bfd8600d2b285db8fb2d4c3213a82d4"}
Mar 20 09:01:01 crc kubenswrapper[5136]: I0320 09:01:01.452839 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29566621-n7g7j" podStartSLOduration=1.452805858 podStartE2EDuration="1.452805858s" podCreationTimestamp="2026-03-20 09:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-20 09:01:01.445479992 +0000 UTC m=+7893.704791153" watchObservedRunningTime="2026-03-20 09:01:01.452805858 +0000 UTC m=+7893.712116999"
Mar 20 09:01:03 crc kubenswrapper[5136]: I0320 09:01:03.056183 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-85wqc"]
Mar 20 09:01:03 crc kubenswrapper[5136]: I0320 09:01:03.068280 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-85wqc"]
Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.043038 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-e4e3-account-create-update-htnkq"]
Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.053142 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-e4e3-account-create-update-htnkq"]
Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.408963 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2204982c-c8aa-4b18-a455-71915264f644" path="/var/lib/kubelet/pods/2204982c-c8aa-4b18-a455-71915264f644/volumes"
Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.409491 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b48a8f95-9236-458f-a8ab-fb15f6878172" path="/var/lib/kubelet/pods/b48a8f95-9236-458f-a8ab-fb15f6878172/volumes"
Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.453680 5136 generic.go:334] "Generic (PLEG): container finished" podID="8494da27-4688-4c23-b4bd-77a8cac9ae31" containerID="e9acc33cb6ef33f971afc8e98aee5abda02ae0be42cbb3e0b4beb36ffafb1e4d" exitCode=0
Mar 20 09:01:04 crc kubenswrapper[5136]: I0320 09:01:04.453719 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29566621-n7g7j" event={"ID":"8494da27-4688-4c23-b4bd-77a8cac9ae31","Type":"ContainerDied","Data":"e9acc33cb6ef33f971afc8e98aee5abda02ae0be42cbb3e0b4beb36ffafb1e4d"}
Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.842129 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.978826 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx99t\" (UniqueName: \"kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t\") pod \"8494da27-4688-4c23-b4bd-77a8cac9ae31\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") "
Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.978876 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data\") pod \"8494da27-4688-4c23-b4bd-77a8cac9ae31\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") "
Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.978959 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle\") pod \"8494da27-4688-4c23-b4bd-77a8cac9ae31\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") "
Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.979072 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys\") pod \"8494da27-4688-4c23-b4bd-77a8cac9ae31\" (UID: \"8494da27-4688-4c23-b4bd-77a8cac9ae31\") "
Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.986065 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8494da27-4688-4c23-b4bd-77a8cac9ae31" (UID: "8494da27-4688-4c23-b4bd-77a8cac9ae31"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:01:05 crc kubenswrapper[5136]: I0320 09:01:05.987201 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t" (OuterVolumeSpecName: "kube-api-access-hx99t") pod "8494da27-4688-4c23-b4bd-77a8cac9ae31" (UID: "8494da27-4688-4c23-b4bd-77a8cac9ae31"). InnerVolumeSpecName "kube-api-access-hx99t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.008111 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8494da27-4688-4c23-b4bd-77a8cac9ae31" (UID: "8494da27-4688-4c23-b4bd-77a8cac9ae31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.029672 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data" (OuterVolumeSpecName: "config-data") pod "8494da27-4688-4c23-b4bd-77a8cac9ae31" (UID: "8494da27-4688-4c23-b4bd-77a8cac9ae31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.082021 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hx99t\" (UniqueName: \"kubernetes.io/projected/8494da27-4688-4c23-b4bd-77a8cac9ae31-kube-api-access-hx99t\") on node \"crc\" DevicePath \"\""
Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.082060 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.082070 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.082078 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8494da27-4688-4c23-b4bd-77a8cac9ae31-fernet-keys\") on node \"crc\" DevicePath \"\""
Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.472267 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29566621-n7g7j" event={"ID":"8494da27-4688-4c23-b4bd-77a8cac9ae31","Type":"ContainerDied","Data":"94a7dd9b06f4da0a623eb78e6e371bb29bfd8600d2b285db8fb2d4c3213a82d4"}
Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.472314 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94a7dd9b06f4da0a623eb78e6e371bb29bfd8600d2b285db8fb2d4c3213a82d4"
Mar 20 09:01:06 crc kubenswrapper[5136]: I0320 09:01:06.472386 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29566621-n7g7j"
Mar 20 09:01:14 crc kubenswrapper[5136]: I0320 09:01:14.029684 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-2d5zx"]
Mar 20 09:01:14 crc kubenswrapper[5136]: I0320 09:01:14.038837 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-2d5zx"]
Mar 20 09:01:14 crc kubenswrapper[5136]: I0320 09:01:14.424050 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a2341fa-02fc-4b08-a2a4-2272078db5d9" path="/var/lib/kubelet/pods/6a2341fa-02fc-4b08-a2a4-2272078db5d9/volumes"
Mar 20 09:01:41 crc kubenswrapper[5136]: I0320 09:01:41.847133 5136 scope.go:117] "RemoveContainer" containerID="ab11099fc5fbb9ac6e0c34feae9a40a9addc504e685cccc5bc8ac39fbfb3793c"
Mar 20 09:01:41 crc kubenswrapper[5136]: I0320 09:01:41.881238 5136 scope.go:117] "RemoveContainer" containerID="f2ac24f272d6a9df55f1b17c9f403e8fc1875096d56818de2641768b249208a8"
Mar 20 09:01:41 crc kubenswrapper[5136]: I0320 09:01:41.927798 5136 scope.go:117] "RemoveContainer" containerID="97846ae11696236889350b3c9161e329e32ca1f71469f4c6bc5cd1b32b64434b"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.184115 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566622-tbs2b"]
Mar 20 09:02:00 crc kubenswrapper[5136]: E0320 09:02:00.186075 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8494da27-4688-4c23-b4bd-77a8cac9ae31" containerName="keystone-cron"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.186093 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="8494da27-4688-4c23-b4bd-77a8cac9ae31" containerName="keystone-cron"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.186403 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="8494da27-4688-4c23-b4bd-77a8cac9ae31" containerName="keystone-cron"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.187375 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566622-tbs2b"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.193804 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.194033 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.194061 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.221687 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566622-tbs2b"]
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.306918 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqxfn\" (UniqueName: \"kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn\") pod \"auto-csr-approver-29566622-tbs2b\" (UID: \"eeb9dd63-3112-441b-961e-b61a752527d8\") " pod="openshift-infra/auto-csr-approver-29566622-tbs2b"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.409204 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqxfn\" (UniqueName: \"kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn\") pod \"auto-csr-approver-29566622-tbs2b\" (UID: \"eeb9dd63-3112-441b-961e-b61a752527d8\") " pod="openshift-infra/auto-csr-approver-29566622-tbs2b"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.429869 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqxfn\" (UniqueName: \"kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn\") pod \"auto-csr-approver-29566622-tbs2b\" (UID: \"eeb9dd63-3112-441b-961e-b61a752527d8\") " pod="openshift-infra/auto-csr-approver-29566622-tbs2b"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.524404 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566622-tbs2b"
Mar 20 09:02:00 crc kubenswrapper[5136]: I0320 09:02:00.989693 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566622-tbs2b"]
Mar 20 09:02:02 crc kubenswrapper[5136]: I0320 09:02:02.047185 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" event={"ID":"eeb9dd63-3112-441b-961e-b61a752527d8","Type":"ContainerStarted","Data":"d159fe0e121c1ae70e920c78ea84976d7cd9710b606eec4bbf2fef223e223281"}
Mar 20 09:02:03 crc kubenswrapper[5136]: I0320 09:02:03.058999 5136 generic.go:334] "Generic (PLEG): container finished" podID="eeb9dd63-3112-441b-961e-b61a752527d8" containerID="d110e85766974db9b00f23e4ec0b43a5d95e3bc9caa9f95ded6497351baab885" exitCode=0
Mar 20 09:02:03 crc kubenswrapper[5136]: I0320 09:02:03.059151 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" event={"ID":"eeb9dd63-3112-441b-961e-b61a752527d8","Type":"ContainerDied","Data":"d110e85766974db9b00f23e4ec0b43a5d95e3bc9caa9f95ded6497351baab885"}
Mar 20 09:02:04 crc kubenswrapper[5136]: I0320 09:02:04.425400 5136 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" Mar 20 09:02:04 crc kubenswrapper[5136]: I0320 09:02:04.595008 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqxfn\" (UniqueName: \"kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn\") pod \"eeb9dd63-3112-441b-961e-b61a752527d8\" (UID: \"eeb9dd63-3112-441b-961e-b61a752527d8\") " Mar 20 09:02:04 crc kubenswrapper[5136]: I0320 09:02:04.602228 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn" (OuterVolumeSpecName: "kube-api-access-zqxfn") pod "eeb9dd63-3112-441b-961e-b61a752527d8" (UID: "eeb9dd63-3112-441b-961e-b61a752527d8"). InnerVolumeSpecName "kube-api-access-zqxfn". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:04 crc kubenswrapper[5136]: I0320 09:02:04.697758 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqxfn\" (UniqueName: \"kubernetes.io/projected/eeb9dd63-3112-441b-961e-b61a752527d8-kube-api-access-zqxfn\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:05 crc kubenswrapper[5136]: I0320 09:02:05.077861 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" event={"ID":"eeb9dd63-3112-441b-961e-b61a752527d8","Type":"ContainerDied","Data":"d159fe0e121c1ae70e920c78ea84976d7cd9710b606eec4bbf2fef223e223281"} Mar 20 09:02:05 crc kubenswrapper[5136]: I0320 09:02:05.077908 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d159fe0e121c1ae70e920c78ea84976d7cd9710b606eec4bbf2fef223e223281" Mar 20 09:02:05 crc kubenswrapper[5136]: I0320 09:02:05.077951 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566622-tbs2b" Mar 20 09:02:05 crc kubenswrapper[5136]: I0320 09:02:05.508791 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566616-x4rw6"] Mar 20 09:02:05 crc kubenswrapper[5136]: I0320 09:02:05.520995 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566616-x4rw6"] Mar 20 09:02:06 crc kubenswrapper[5136]: I0320 09:02:06.408058 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c085dee-ef7e-47eb-93aa-6ecf4d45030c" path="/var/lib/kubelet/pods/0c085dee-ef7e-47eb-93aa-6ecf4d45030c/volumes" Mar 20 09:02:12 crc kubenswrapper[5136]: I0320 09:02:12.052392 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-plxtl"] Mar 20 09:02:12 crc kubenswrapper[5136]: I0320 09:02:12.069959 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-plxtl"] Mar 20 09:02:12 crc kubenswrapper[5136]: I0320 09:02:12.408360 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901ef065-f425-4ab7-b726-7d98704a58f8" path="/var/lib/kubelet/pods/901ef065-f425-4ab7-b726-7d98704a58f8/volumes" Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.055202 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-hkzk7"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.071763 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-m289f"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.080193 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-42278"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.088503 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-hkzk7"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.096588 5136 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-6rchx"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.105693 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-bp9vg"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.114079 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-m289f"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.122001 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-6rchx"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.130146 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-42278"] Mar 20 09:02:13 crc kubenswrapper[5136]: I0320 09:02:13.139598 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-bp9vg"] Mar 20 09:02:14 crc kubenswrapper[5136]: I0320 09:02:14.410913 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60ddf395-2544-4ebe-b1e2-37321af6438e" path="/var/lib/kubelet/pods/60ddf395-2544-4ebe-b1e2-37321af6438e/volumes" Mar 20 09:02:14 crc kubenswrapper[5136]: I0320 09:02:14.411972 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d18b334-bb20-43b9-8322-c2e847b74703" path="/var/lib/kubelet/pods/7d18b334-bb20-43b9-8322-c2e847b74703/volumes" Mar 20 09:02:14 crc kubenswrapper[5136]: I0320 09:02:14.412683 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4" path="/var/lib/kubelet/pods/a4d3e02e-0f46-48dd-b9ef-8cb0135eabb4/volumes" Mar 20 09:02:14 crc kubenswrapper[5136]: I0320 09:02:14.413369 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a725d785-3630-4adc-8417-15fceaecb250" path="/var/lib/kubelet/pods/a725d785-3630-4adc-8417-15fceaecb250/volumes" Mar 20 09:02:14 crc 
kubenswrapper[5136]: I0320 09:02:14.414633 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d573f1ae-c37f-487a-a059-5200647084d4" path="/var/lib/kubelet/pods/d573f1ae-c37f-487a-a059-5200647084d4/volumes" Mar 20 09:02:31 crc kubenswrapper[5136]: I0320 09:02:31.032226 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4m6bk"] Mar 20 09:02:31 crc kubenswrapper[5136]: I0320 09:02:31.043713 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-4m6bk"] Mar 20 09:02:32 crc kubenswrapper[5136]: I0320 09:02:32.436286 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2" path="/var/lib/kubelet/pods/7e3f45dd-870b-4934-b1b5-e7ec4eabc1e2/volumes" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.021389 5136 scope.go:117] "RemoveContainer" containerID="62941df7329d036b75c1f4c804a7915f68955eff793a634ef29d9182d34a9d9d" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.051040 5136 scope.go:117] "RemoveContainer" containerID="f77e438e3702b6de098fbd305814d9a4eb3df2f7161e741a8bd1bf247fc8becb" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.100942 5136 scope.go:117] "RemoveContainer" containerID="cee664fd2e2a84523a4d0f3b3405435f0b03db0425ff048065d98c5612016681" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.141247 5136 scope.go:117] "RemoveContainer" containerID="6de711a276196e50b3e83c58fdab583ad6f7407fccb722557535f82c9abd51a7" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.193039 5136 scope.go:117] "RemoveContainer" containerID="c40db219321d83dc20c2ac8a7868d46f48eda3e65baae9eafe26d50e1df17298" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.243379 5136 scope.go:117] "RemoveContainer" containerID="d695b9c2dbcf5b99f4e58724aa314335827d63b932809de7ba7a6c3af214ccca" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.302647 5136 scope.go:117] "RemoveContainer" 
containerID="fd6a9d8c42b14afc4c799021ad9e86afa50559ab2987d12455e53497c38a9c98" Mar 20 09:02:42 crc kubenswrapper[5136]: I0320 09:02:42.343778 5136 scope.go:117] "RemoveContainer" containerID="7b5e974893ba339d7d465bcbfaf4888d7a35fa993cb39d96df7bf3535b3e030c" Mar 20 09:02:45 crc kubenswrapper[5136]: I0320 09:02:45.047986 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bknwr"] Mar 20 09:02:45 crc kubenswrapper[5136]: I0320 09:02:45.057868 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-bknwr"] Mar 20 09:02:45 crc kubenswrapper[5136]: I0320 09:02:45.822622 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 09:02:45 crc kubenswrapper[5136]: I0320 09:02:45.822709 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 09:02:46 crc kubenswrapper[5136]: I0320 09:02:46.043087 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-mdczc"] Mar 20 09:02:46 crc kubenswrapper[5136]: I0320 09:02:46.057322 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-mdczc"] Mar 20 09:02:46 crc kubenswrapper[5136]: I0320 09:02:46.411261 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0869b44d-0a1b-47ae-9836-8940a31bfcf3" path="/var/lib/kubelet/pods/0869b44d-0a1b-47ae-9836-8940a31bfcf3/volumes" Mar 20 09:02:46 crc kubenswrapper[5136]: I0320 09:02:46.428203 5136 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10383e2-004c-458c-922b-dd13574f12ff" path="/var/lib/kubelet/pods/c10383e2-004c-458c-922b-dd13574f12ff/volumes" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.299530 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:49 crc kubenswrapper[5136]: E0320 09:02:49.300651 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eeb9dd63-3112-441b-961e-b61a752527d8" containerName="oc" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.300672 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="eeb9dd63-3112-441b-961e-b61a752527d8" containerName="oc" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.300947 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="eeb9dd63-3112-441b-961e-b61a752527d8" containerName="oc" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.301898 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.323933 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.329631 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.331311 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.335926 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.371526 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.382649 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6756\" (UniqueName: \"kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.382769 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c948k\" (UniqueName: \"kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.382838 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.382892 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.437495 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.486209 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6756\" (UniqueName: \"kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.486373 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c948k\" (UniqueName: \"kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.486457 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.486553 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " 
pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.487575 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.489011 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.541410 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.543377 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.544605 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6756\" (UniqueName: \"kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756\") pod \"neutron-24c6-account-create-update-48knr\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") " pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.551158 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.648994 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.649245 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" containerName="openstackclient" containerID="cri-o://80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423" gracePeriod=2 Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.690554 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c948k\" (UniqueName: \"kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k\") pod \"root-account-create-update-dvqsp\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") " pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.697146 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-475lv\" (UniqueName: \"kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " 
pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.697254 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.708745 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.801696 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-475lv\" (UniqueName: \"kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.801806 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.802717 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.859085 
5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.879340 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-475lv\" (UniqueName: \"kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv\") pod \"cinder-b8c9-account-create-update-hh6pb\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") " pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.897905 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Mar 20 09:02:49 crc kubenswrapper[5136]: I0320 09:02:49.925501 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.002521 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:50 crc kubenswrapper[5136]: E0320 09:02:50.003169 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" containerName="openstackclient" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.003194 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" containerName="openstackclient" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.003429 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" containerName="openstackclient" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.004426 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.019226 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.020637 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.051019 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.096558 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.098626 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.109970 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcj6l\" (UniqueName: \"kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.110100 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.110551 5136 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"nova-api-db-secret" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.147866 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.195099 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.196737 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.207305 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.211754 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.211833 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.211913 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfzdq\" (UniqueName: \"kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " 
pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.211959 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcj6l\" (UniqueName: \"kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.215292 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.238174 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.266442 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcj6l\" (UniqueName: \"kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l\") pod \"glance-6249-account-create-update-mtgp6\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") " pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.275512 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.277065 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.286501 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315671 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315721 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj8b7\" (UniqueName: \"kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315765 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315822 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgnb2\" (UniqueName: \"kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " 
pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315870 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.315905 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfzdq\" (UniqueName: \"kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.316770 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.337313 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.392774 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.410567 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfzdq\" (UniqueName: \"kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq\") pod \"nova-api-c7dc-account-create-update-bslnf\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") " pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.421004 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgnb2\" (UniqueName: \"kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.421077 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.421188 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.421216 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj8b7\" (UniqueName: 
\"kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.422266 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.422710 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.426985 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.457703 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj8b7\" (UniqueName: \"kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7\") pod \"nova-cell1-e664-account-create-update-fb9gm\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") " pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.461954 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.490551 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgnb2\" (UniqueName: \"kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2\") pod \"nova-cell0-adbe-account-create-update-b276s\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") " pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.513970 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.514272 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="ovn-northd" containerID="cri-o://491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" gracePeriod=30 Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.514642 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="openstack-network-exporter" containerID="cri-o://43d8180654ac711b0a6c655f92be552ad6bb0d4e4426596385b695958afa2b74" gracePeriod=30 Mar 20 09:02:50 crc kubenswrapper[5136]: E0320 09:02:50.529042 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:50 crc kubenswrapper[5136]: E0320 09:02:50.554146 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data podName:e2c9ab46-3143-4472-a606-cd75def78f41 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:51.054114448 +0000 UTC m=+8003.313425599 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data") pod "rabbitmq-cell1-server-0" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41") : configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.550375 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.549917 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.556542 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.560962 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.633162 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.703362 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.736908 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.736973 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfq5r\" (UniqueName: \"kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.837175 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="galera" probeResult="failure" output="command timed out" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.852942 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfq5r\" (UniqueName: \"kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.853455 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.854283 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.915796 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Mar 20 09:02:50 crc kubenswrapper[5136]: I0320 09:02:50.937845 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="openstack-network-exporter" containerID="cri-o://adb0722e1140982d66b6bcc4b53d108b1a1da62c36d81757cff1dc3b6b31b52c" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:50.963214 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-2"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:50.963647 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-2" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="openstack-network-exporter" containerID="cri-o://8fc531019af740166b849284cf77209f3d16d2b70c219b2f80048bdb08d14be3" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:50.981357 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfq5r\" (UniqueName: \"kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r\") pod \"aodh-35ea-account-create-update-7d4sf\" (UID: 
\"bdff16b6-0410-4448-a15c-3f22f5890d91\") " pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.042712 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-1"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.043260 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-1" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="openstack-network-exporter" containerID="cri-o://ff7aad450b5bf148e0d8e2a6a1a41eb2960ad7d591108755ada3cf41b5ab3619" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.054954 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.057676 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-35ea-account-create-update-6jb9f"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.069616 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-35ea-account-create-update-6jb9f"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.089700 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.089782 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data podName:e2c9ab46-3143-4472-a606-cd75def78f41 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:52.089745362 +0000 UTC m=+8004.349056513 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data") pod "rabbitmq-cell1-server-0" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41") : configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.123411 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.127870 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="openstack-network-exporter" containerID="cri-o://2ffece60b271290211f6f3963d1642000676cfce31547f3f28dd8ecf96867815" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.218020 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-1"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.218662 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-1" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="openstack-network-exporter" containerID="cri-o://6504065e281b8c5e6e76cf9517fba24d633b6c7805c447e42fbc49093a42beeb" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.293209 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-2"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.294356 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-2" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="openstack-network-exporter" containerID="cri-o://aa7d63dd9f5d69196ca03f337b7c8a99ee8f9a0db5fd272afae4861760ebba16" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.311628 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" 
podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="ovsdbserver-sb" containerID="cri-o://4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.332715 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:51 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: if [ -n "" ]; then Mar 20 09:02:51 crc kubenswrapper[5136]: GRANT_DATABASE="" Mar 20 09:02:51 crc kubenswrapper[5136]: else Mar 20 09:02:51 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:51 crc kubenswrapper[5136]: fi Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:51 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:51 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:51 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:51 crc kubenswrapper[5136]: # support updates Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.337615 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-dvqsp" podUID="7660b6b5-094d-4da5-9d34-fe85c863d887" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.354996 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqx9d"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.399503 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-dqx9d"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.401181 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="ovsdbserver-nb" containerID="cri-o://6359501b6448986da36467b6a23a3fd5909f20740da745780317887a779e734a" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.407054 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-2" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="ovsdbserver-sb" containerID="cri-o://c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af" gracePeriod=300 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.436528 5136 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/nova-scheduler-0" secret="" err="secret \"nova-nova-dockercfg-nn865\" not found" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.436671 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.437088 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon-log" containerID="cri-o://635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.437795 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon" containerID="cri-o://5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.453220 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.538861 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.539247 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b494fbb57-cd7nw" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-api" containerID="cri-o://293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.539881 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7b494fbb57-cd7nw" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-httpd" containerID="cri-o://b54ae1c896c24440630a7756d526255f3def96dbed5cb096fc4d77997e706367" gracePeriod=30 
Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.630885 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff is running failed: container process not found" containerID="4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.636852 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.637075 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="dnsmasq-dns" containerID="cri-o://dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299" gracePeriod=10 Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.638178 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff is running failed: container process not found" containerID="4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.643712 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff is running failed: container process not found" containerID="4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.643793 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc 
= container is not created or running: checking if PID of 4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="ovsdbserver-sb" Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.652441 5136 secret.go:188] Couldn't get secret openstack/nova-scheduler-config-data: secret "nova-scheduler-config-data" not found Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.652514 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data podName:41ed7c59-18ee-44ec-8068-ccc9e82485a6 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:52.152490695 +0000 UTC m=+8004.411801936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data") pod "nova-scheduler-0" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6") : secret "nova-scheduler-config-data" not found Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.696105 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af is running failed: container process not found" containerID="c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.703459 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af is running failed: container process not found" containerID="c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af" 
cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.704746 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af is running failed: container process not found" containerID="c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af" cmd=["/usr/bin/pidof","ovsdb-server"] Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.704779 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-2" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="ovsdbserver-sb" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.711217 5136 generic.go:334] "Generic (PLEG): container finished" podID="22659681-bc2b-4056-81d6-96b046e45712" containerID="43d8180654ac711b0a6c655f92be552ad6bb0d4e4426596385b695958afa2b74" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.711302 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerDied","Data":"43d8180654ac711b0a6c655f92be552ad6bb0d4e4426596385b695958afa2b74"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.715894 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.719472 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-api" containerID="cri-o://7cef857842ff9d2b9ec6fba6fced2a4a47da1ca6826d17831d4379411662258d" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 
09:02:51.719906 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-listener" containerID="cri-o://7e50b645c14d1546ec6ea5f4cc398a09d244fd42ce33f42ece0b4ffe6904f5e2" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.719947 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-notifier" containerID="cri-o://dcd83e13ab91d1b3d212755c407d51e55f888a96ac55ba1a109bfc09166fa35d" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.719977 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-evaluator" containerID="cri-o://2ead91f10403f4d804be86964d57e08ade2602b5155572d57113b707313fe0a4" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.729431 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-lxzxf"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.744697 5136 generic.go:334] "Generic (PLEG): container finished" podID="48418ecc-b768-4848-b663-1a84761f5b32" containerID="6504065e281b8c5e6e76cf9517fba24d633b6c7805c447e42fbc49093a42beeb" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.744757 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerDied","Data":"6504065e281b8c5e6e76cf9517fba24d633b6c7805c447e42fbc49093a42beeb"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.788796 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-wjckm"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.793949 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_a276ba4e-bbab-4a83-8fd2-d77573782aa6/ovsdbserver-sb/0.log" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.794005 5136 generic.go:334] "Generic (PLEG): container finished" podID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerID="adb0722e1140982d66b6bcc4b53d108b1a1da62c36d81757cff1dc3b6b31b52c" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.794027 5136 generic.go:334] "Generic (PLEG): container finished" podID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerID="4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" exitCode=143 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.794140 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerDied","Data":"adb0722e1140982d66b6bcc4b53d108b1a1da62c36d81757cff1dc3b6b31b52c"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.794175 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerDied","Data":"4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.807154 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-lxzxf"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.812472 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dvqsp" event={"ID":"7660b6b5-094d-4da5-9d34-fe85c863d887","Type":"ContainerStarted","Data":"451e44515c0cda1b30913a6cdb1ebc4f0813478346503aa740945d429ab0443d"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.813168 5136 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-dvqsp" secret="" err="secret \"galera-openstack-cell1-dockercfg-mtswd\" not found" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.833117 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.833349 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-674ffbb556-dfk75" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-log" containerID="cri-o://b1983019e3cc484fe8f15d4854d502ecd0a69d384bd0f1cd05cd048f9cc159a0" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.833413 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:51 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: if [ -n "" ]; then Mar 20 09:02:51 crc kubenswrapper[5136]: GRANT_DATABASE="" Mar 20 09:02:51 crc kubenswrapper[5136]: else Mar 20 09:02:51 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:51 crc kubenswrapper[5136]: fi Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:51 crc kubenswrapper[5136]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:51 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:51 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:51 crc kubenswrapper[5136]: # support updates Mar 20 09:02:51 crc kubenswrapper[5136]: Mar 20 09:02:51 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.833482 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-674ffbb556-dfk75" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-api" containerID="cri-o://9849b01109acdf6259a4119de8fce764d067a8b917a7d5b6965b6bd00e1aa60a" gracePeriod=30 Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.834879 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-dvqsp" podUID="7660b6b5-094d-4da5-9d34-fe85c863d887" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.862670 5136 generic.go:334] "Generic (PLEG): container finished" podID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerID="ff7aad450b5bf148e0d8e2a6a1a41eb2960ad7d591108755ada3cf41b5ab3619" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.862733 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"7e0c945f-6773-4bf8-872d-7eb5110de79f","Type":"ContainerDied","Data":"ff7aad450b5bf148e0d8e2a6a1a41eb2960ad7d591108755ada3cf41b5ab3619"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.869025 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4fd5c29-d308-41d0-9781-9b6d9625c19c/ovsdbserver-nb/0.log" Mar 20 09:02:51 crc 
kubenswrapper[5136]: I0320 09:02:51.869113 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerID="2ffece60b271290211f6f3963d1642000676cfce31547f3f28dd8ecf96867815" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.869240 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerDied","Data":"2ffece60b271290211f6f3963d1642000676cfce31547f3f28dd8ecf96867815"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.878144 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9/ovsdbserver-sb/0.log" Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.878183 5136 generic.go:334] "Generic (PLEG): container finished" podID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerID="8fc531019af740166b849284cf77209f3d16d2b70c219b2f80048bdb08d14be3" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.878198 5136 generic.go:334] "Generic (PLEG): container finished" podID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerID="c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af" exitCode=143 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.878252 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerDied","Data":"8fc531019af740166b849284cf77209f3d16d2b70c219b2f80048bdb08d14be3"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.878276 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerDied","Data":"c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af"} Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.878836 5136 configmap.go:193] Couldn't get configMap 
openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 20 09:02:51 crc kubenswrapper[5136]: E0320 09:02:51.878895 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts podName:7660b6b5-094d-4da5-9d34-fe85c863d887 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:52.378877383 +0000 UTC m=+8004.638188524 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts") pod "root-account-create-update-dvqsp" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887") : configmap "openstack-cell1-scripts" not found Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.885957 5136 generic.go:334] "Generic (PLEG): container finished" podID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerID="aa7d63dd9f5d69196ca03f337b7c8a99ee8f9a0db5fd272afae4861760ebba16" exitCode=2 Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.886302 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerDied","Data":"aa7d63dd9f5d69196ca03f337b7c8a99ee8f9a0db5fd272afae4861760ebba16"} Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.918646 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-wjckm"] Mar 20 09:02:51 crc kubenswrapper[5136]: I0320 09:02:51.973179 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.028745 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.029063 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" 
podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="cinder-scheduler" containerID="cri-o://49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.029396 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="probe" containerID="cri-o://b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.100460 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.100664 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-log" containerID="cri-o://9e5b58fa90ab6a9a965276a68d4ee135aa252e61fbc159c5d0aa6f6134637333" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.100883 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-httpd" containerID="cri-o://7ecfa88277a19c2fc4a9782c7beb9c21c6c1a5a38d56723b3e67fc0044f8bbb4" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.133877 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.134147 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-log" containerID="cri-o://62b81b8fc1d95273635ec6d0f69c524950ac024e8fd9b6ac1d7381fe6f428b6f" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.134563 
5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-httpd" containerID="cri-o://8e6a89b054dab23d4263c5fb97b6aba8bc51276e7bd2c8d9be34c61f68879a63" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.152311 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.152553 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api-log" containerID="cri-o://99aa025dc61faebaa87d0e9d2a4856c44ddf012f862d7c369fe941dabbd9836f" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.153760 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "glance" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="glance" Mar 20 09:02:52 crc kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # 
going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.156106 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api" containerID="cri-o://23592f2e3f685cf11f8e09b90281731a317e3331b51d973536b5b6cf9ce01a69" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.166512 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-6249-account-create-update-mtgp6" podUID="bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.169413 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc 
kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "neutron" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="neutron" Mar 20 09:02:52 crc kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.173027 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "cinder" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="cinder" Mar 20 09:02:52 crc 
kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.173098 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"neutron-db-secret\\\" not found\"" pod="openstack/neutron-24c6-account-create-update-48knr" podUID="fe703c94-1aec-47a6-81a7-8510ed330866" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.174114 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-b8c9-account-create-update-hh6pb" podUID="536a487a-ae23-4eed-9bc8-221a9b85bed4" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.197589 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.197885 5136 secret.go:188] Couldn't get secret openstack/nova-scheduler-config-data: secret "nova-scheduler-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.197955 5136 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data podName:41ed7c59-18ee-44ec-8068-ccc9e82485a6 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.197941302 +0000 UTC m=+8005.457252453 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data") pod "nova-scheduler-0" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6") : secret "nova-scheduler-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.197996 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.198015 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data podName:e2c9ab46-3143-4472-a606-cd75def78f41 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:54.198009764 +0000 UTC m=+8006.457320915 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data") pod "rabbitmq-cell1-server-0" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41") : configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.206182 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.230583 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.256181 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "nova_cell0" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="nova_cell0" Mar 20 09:02:52 crc kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. 
MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.261549 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-adbe-account-create-update-b276s" podUID="c532fd14-6718-4c7d-9e38-c68bf7b2da6b" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.297793 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.301147 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.301421 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data podName:804d1bff-7c63-45a1-bf1a-68f3eedb6ac7 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:52.801408065 +0000 UTC m=+8005.060719216 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data") pod "rabbitmq-server-0" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7") : configmap "rabbitmq-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.313448 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:52 crc kubenswrapper[5136]: W0320 09:02:52.322159 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod254505fd_2596_4c4a_bf0a_2565e8b3ae5c.slice/crio-11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3 WatchSource:0}: Error finding container 11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3: Status 404 returned error can't find the container with id 11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.325125 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.325369 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" containerName="nova-scheduler-scheduler" containerID="cri-o://fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.329125 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-1" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="ovsdbserver-sb" containerID="cri-o://49b205017f1afa7852c73b13644c48ef8382cbecd7e8c8de906f466d5717a06f" gracePeriod=299 Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.341643 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc 
kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "nova_api" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="nova_api" Mar 20 09:02:52 crc kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.342679 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.342926 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-log" containerID="cri-o://cd5663a9b617be114b64e32a8582baab8d6015f76d7bc3afb172624a4c98b3c7" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.343339 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-metadata" containerID="cri-o://1e2a347a5b7fd1a421ed6a7c665114567c818ec00d533ffab87b5e587c7ecf89" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.343418 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-c7dc-account-create-update-bslnf" podUID="254505fd-2596-4c4a-bf0a-2565e8b3ae5c" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.363375 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.363578 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-log" containerID="cri-o://69c8be45f764ed420f7bbef558c7c52b3207d932e0c8d1c5e50585f4ba78387d" gracePeriod=30 Mar 20 09:02:52 crc 
kubenswrapper[5136]: I0320 09:02:52.364001 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-api" containerID="cri-o://bd88353eb3bfead6453753b043892b43c76148c22dbdd8749c35d5213cf8d63b" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.384863 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.395993 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-1" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="ovsdbserver-nb" containerID="cri-o://50054ee6901ece61fe4a75813bd8a9abcbe38aad68d2fc8adceaeed3cbddce45" gracePeriod=299 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.396125 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.397515 5136 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/prometheus-metric-storage-0" secret="" err="secret \"metric-storage-prometheus-dockercfg-jt99d\" not found" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.420068 5136 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.420126 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts podName:7660b6b5-094d-4da5-9d34-fe85c863d887 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.420112691 +0000 UTC m=+8005.679423842 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts") pod "root-account-create-update-dvqsp" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887") : configmap "openstack-cell1-scripts" not found Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.443604 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1513f332-b5c6-40ca-9c3a-4ef7b1f78672" path="/var/lib/kubelet/pods/1513f332-b5c6-40ca-9c3a-4ef7b1f78672/volumes" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.444350 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b2a0f1-96d1-4edc-a219-60194a2bf4b9" path="/var/lib/kubelet/pods/a9b2a0f1-96d1-4edc-a219-60194a2bf4b9/volumes" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.444870 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db04162b-4913-4acc-b387-d7324202a05b" path="/var/lib/kubelet/pods/db04162b-4913-4acc-b387-d7324202a05b/volumes" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.446502 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9eddce1-1338-489a-b0e9-f008c33fea0f" path="/var/lib/kubelet/pods/f9eddce1-1338-489a-b0e9-f008c33fea0f/volumes" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.462467 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.480027 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-w7sqw"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.489448 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-w7sqw"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.502185 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.508756 5136 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="11508a60-8214-4811-898f-9542eee208d5" containerName="nova-cell0-conductor-conductor" containerID="cri-o://2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.518279 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.520977 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="galera" containerID="cri-o://a2cd799ad38f20f3a20df188a90ca9d10f639dafb3f002a582a1fe8b8331c153" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.523069 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-thanos-prometheus-http-client-file: secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.523181 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.023165101 +0000 UTC m=+8005.282476252 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.526131 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-web-config: secret "prometheus-metric-storage-web-config" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.526208 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.026193195 +0000 UTC m=+8005.285504346 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-web-config" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.526758 5136 projected.go:263] Couldn't get secret openstack/prometheus-metric-storage-tls-assets-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.526771 5136 projected.go:194] Error preparing data for projected volume tls-assets for pod openstack/prometheus-metric-storage-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.526793 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. 
No retries permitted until 2026-03-20 09:02:53.026784653 +0000 UTC m=+8005.286095794 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.528250 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage: secret "prometheus-metric-storage" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.528338 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.02832125 +0000 UTC m=+8005.287632401 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage" not found Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.531774 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.532429 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="ea7881c5-b719-41b0-8046-249f7fdb6f61" containerName="nova-cell1-conductor-conductor" containerID="cri-o://5df9d903ae57ec8baad2fe6c51be0e13f0c8a558bfc5471ea6ef07feb8e164f7" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.574982 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.591620 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:52 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: if [ -n "nova_cell1" ]; then Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="nova_cell1" Mar 20 09:02:52 crc kubenswrapper[5136]: else Mar 20 09:02:52 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:52 crc kubenswrapper[5136]: fi Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:52 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:52 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:52 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:52 crc kubenswrapper[5136]: # support updates Mar 20 09:02:52 crc kubenswrapper[5136]: Mar 20 09:02:52 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.593059 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-cell1-db-secret\\\" not found\"" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" podUID="6d5ec1f6-0809-4582-902e-00638e6e4580" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.715915 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.716446 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961" gracePeriod=30 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.758421 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.759442 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4j4n\" (UniqueName: \"kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n\") pod \"9cefd58c-a889-4893-aa87-b106eae1c7ad\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.759623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle\") pod 
\"9cefd58c-a889-4893-aa87-b106eae1c7ad\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.759728 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config\") pod \"9cefd58c-a889-4893-aa87-b106eae1c7ad\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.759789 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret\") pod \"9cefd58c-a889-4893-aa87-b106eae1c7ad\" (UID: \"9cefd58c-a889-4893-aa87-b106eae1c7ad\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.760309 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/alertmanager-metric-storage-0" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="alertmanager" containerID="cri-o://20f8a9a945087915c09ba9f6c5bb3fad1e06a23db6077bf675c7ee359a2b9ea4" gracePeriod=120 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.765138 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/alertmanager-metric-storage-0" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="config-reloader" containerID="cri-o://c48b0b35287dd607ba880a1efbf0d79170312ce43d17777e16e04be5b17bbe8a" gracePeriod=120 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.773927 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.777401 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.790294 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a276ba4e-bbab-4a83-8fd2-d77573782aa6/ovsdbserver-sb/0.log" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.790366 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.791350 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.805620 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.806843 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n" (OuterVolumeSpecName: "kube-api-access-w4j4n") pod "9cefd58c-a889-4893-aa87-b106eae1c7ad" (UID: "9cefd58c-a889-4893-aa87-b106eae1c7ad"). InnerVolumeSpecName "kube-api-access-w4j4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.857065 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cefd58c-a889-4893-aa87-b106eae1c7ad" (UID: "9cefd58c-a889-4893-aa87-b106eae1c7ad"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.858182 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "9cefd58c-a889-4893-aa87-b106eae1c7ad" (UID: "9cefd58c-a889-4893-aa87-b106eae1c7ad"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869060 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869098 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb\") pod \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869179 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869259 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kcrdf\" (UniqueName: \"kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf\") pod \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869293 
5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc\") pod \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869337 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config\") pod \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869386 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869427 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c7dl\" (UniqueName: \"kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869451 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" 
(UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869468 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb\") pod \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\" (UID: \"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.869492 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.870421 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.871339 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config" (OuterVolumeSpecName: "config") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.877310 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.877337 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.877347 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.877358 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4j4n\" (UniqueName: \"kubernetes.io/projected/9cefd58c-a889-4893-aa87-b106eae1c7ad-kube-api-access-w4j4n\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.877419 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: E0320 09:02:52.877466 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data podName:804d1bff-7c63-45a1-bf1a-68f3eedb6ac7 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:53.877451809 +0000 UTC m=+8006.136762950 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data") pod "rabbitmq-server-0" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7") : configmap "rabbitmq-config-data" not found Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.878461 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.878688 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts" (OuterVolumeSpecName: "scripts") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.878752 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.882441 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.906206 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf" (OuterVolumeSpecName: "kube-api-access-kcrdf") pod "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" (UID: "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6"). InnerVolumeSpecName "kube-api-access-kcrdf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.906354 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.933058 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl" (OuterVolumeSpecName: "kube-api-access-9c7dl") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "kube-api-access-9c7dl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.948237 5136 generic.go:334] "Generic (PLEG): container finished" podID="305f3f22-2f38-44c5-8e63-1f028edce331" containerID="b54ae1c896c24440630a7756d526255f3def96dbed5cb096fc4d77997e706367" exitCode=0 Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.948309 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerDied","Data":"b54ae1c896c24440630a7756d526255f3def96dbed5cb096fc4d77997e706367"} Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.951746 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.979739 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c7dc-account-create-update-bslnf" event={"ID":"254505fd-2596-4c4a-bf0a-2565e8b3ae5c","Type":"ContainerStarted","Data":"11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3"} Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.981858 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c7dl\" (UniqueName: \"kubernetes.io/projected/a276ba4e-bbab-4a83-8fd2-d77573782aa6-kube-api-access-9c7dl\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.981881 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.981912 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") on node \"crc\" " Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.981925 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a276ba4e-bbab-4a83-8fd2-d77573782aa6-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:52 crc kubenswrapper[5136]: I0320 09:02:52.981938 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kcrdf\" (UniqueName: \"kubernetes.io/projected/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-kube-api-access-kcrdf\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.000335 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:53 crc kubenswrapper[5136]: 
E0320 09:02:53.000948 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: if [ -n "nova_api" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="nova_api" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.002010 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-c7dc-account-create-update-bslnf" podUID="254505fd-2596-4c4a-bf0a-2565e8b3ae5c" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.007016 5136 generic.go:334] "Generic (PLEG): container finished" podID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerID="69c8be45f764ed420f7bbef558c7c52b3207d932e0c8d1c5e50585f4ba78387d" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.007079 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerDied","Data":"69c8be45f764ed420f7bbef558c7c52b3207d932e0c8d1c5e50585f4ba78387d"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.008805 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24c6-account-create-update-48knr" event={"ID":"fe703c94-1aec-47a6-81a7-8510ed330866","Type":"ContainerStarted","Data":"a8837f3aa103623ab06077854fdb2ccb4185d7609e123f11e58958b51d99dfcb"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.015833 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.024057 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="rabbitmq" 
containerID="cri-o://a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80" gracePeriod=604800 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.026519 5136 generic.go:334] "Generic (PLEG): container finished" podID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerID="b1983019e3cc484fe8f15d4854d502ecd0a69d384bd0f1cd05cd048f9cc159a0" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.026576 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerDied","Data":"b1983019e3cc484fe8f15d4854d502ecd0a69d384bd0f1cd05cd048f9cc159a0"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.032305 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "9cefd58c-a889-4893-aa87-b106eae1c7ad" (UID: "9cefd58c-a889-4893-aa87-b106eae1c7ad"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.036222 5136 generic.go:334] "Generic (PLEG): container finished" podID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerID="20f8a9a945087915c09ba9f6c5bb3fad1e06a23db6077bf675c7ee359a2b9ea4" exitCode=0 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.036312 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerDied","Data":"20f8a9a945087915c09ba9f6c5bb3fad1e06a23db6077bf675c7ee359a2b9ea4"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.042676 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6249-account-create-update-mtgp6" event={"ID":"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1","Type":"ContainerStarted","Data":"b9a5100ed7164058172ea0371e785fef7e937e4b5c850e24c27f2580525e965a"} Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.044628 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: if [ -n "glance" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="glance" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc 
kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.045864 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"glance-db-secret\\\" not found\"" pod="openstack/glance-6249-account-create-update-mtgp6" podUID="bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.048376 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.050543 5136 generic.go:334] "Generic (PLEG): container finished" podID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerID="62b81b8fc1d95273635ec6d0f69c524950ac024e8fd9b6ac1d7381fe6f428b6f" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.050669 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerDied","Data":"62b81b8fc1d95273635ec6d0f69c524950ac024e8fd9b6ac1d7381fe6f428b6f"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.083986 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-nb-1_48418ecc-b768-4848-b663-1a84761f5b32/ovsdbserver-nb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.084036 5136 generic.go:334] "Generic (PLEG): container finished" podID="48418ecc-b768-4848-b663-1a84761f5b32" containerID="50054ee6901ece61fe4a75813bd8a9abcbe38aad68d2fc8adceaeed3cbddce45" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.084604 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerDied","Data":"50054ee6901ece61fe4a75813bd8a9abcbe38aad68d2fc8adceaeed3cbddce45"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.087216 5136 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/9cefd58c-a889-4893-aa87-b106eae1c7ad-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087279 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage: secret "prometheus-metric-storage" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087328 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:54.087313527 +0000 UTC m=+8006.346624678 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087351 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-web-config: secret "prometheus-metric-storage-web-config" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087411 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:54.087392049 +0000 UTC m=+8006.346703240 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-web-config" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087545 5136 projected.go:263] Couldn't get secret openstack/prometheus-metric-storage-tls-assets-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087566 5136 projected.go:194] Error preparing data for projected volume tls-assets for pod openstack/prometheus-metric-storage-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087613 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:54.087597065 +0000 UTC m=+8006.346908216 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087629 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-thanos-prometheus-http-client-file: secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.087658 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:54.087650137 +0000 UTC m=+8006.346961278 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.102038 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.103678 5136 generic.go:334] "Generic (PLEG): container finished" podID="9cefd58c-a889-4893-aa87-b106eae1c7ad" containerID="80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423" exitCode=137 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.104132 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.107241 5136 scope.go:117] "RemoveContainer" containerID="80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.118147 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config" (OuterVolumeSpecName: "config") pod "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" (UID: "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.123424 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_7e0c945f-6773-4bf8-872d-7eb5110de79f/ovsdbserver-sb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.123483 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.124734 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4fd5c29-d308-41d0-9781-9b6d9625c19c/ovsdbserver-nb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.124770 5136 generic.go:334] "Generic (PLEG): container finished" podID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerID="6359501b6448986da36467b6a23a3fd5909f20740da745780317887a779e734a" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.124828 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerDied","Data":"6359501b6448986da36467b6a23a3fd5909f20740da745780317887a779e734a"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.137506 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b8c9-account-create-update-hh6pb" event={"ID":"536a487a-ae23-4eed-9bc8-221a9b85bed4","Type":"ContainerStarted","Data":"c5b0b0c1f851b7ef94e6535ec54695cb38543bd96eb082f0402a43b8aed2912c"} Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.139084 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 
20 09:02:53 crc kubenswrapper[5136]: if [ -n "cinder" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="cinder" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.139565 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" (UID: "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.139775 5136 generic.go:334] "Generic (PLEG): container finished" podID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerID="cd5663a9b617be114b64e32a8582baab8d6015f76d7bc3afb172624a4c98b3c7" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.141988 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"cinder-db-secret\\\" not found\"" pod="openstack/cinder-b8c9-account-create-update-hh6pb" podUID="536a487a-ae23-4eed-9bc8-221a9b85bed4" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.143152 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerDied","Data":"cd5663a9b617be114b64e32a8582baab8d6015f76d7bc3afb172624a4c98b3c7"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.161709 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" (UID: "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.168290 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" (UID: "f93080a1-9819-48ad-a84d-ddc2d6ffe5e6"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.173017 5136 generic.go:334] "Generic (PLEG): container finished" podID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerID="2ead91f10403f4d804be86964d57e08ade2602b5155572d57113b707313fe0a4" exitCode=0 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.173131 5136 generic.go:334] "Generic (PLEG): container finished" podID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerID="7cef857842ff9d2b9ec6fba6fced2a4a47da1ca6826d17831d4379411662258d" exitCode=0 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.173226 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerDied","Data":"2ead91f10403f4d804be86964d57e08ade2602b5155572d57113b707313fe0a4"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.173322 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerDied","Data":"7cef857842ff9d2b9ec6fba6fced2a4a47da1ca6826d17831d4379411662258d"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.183562 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.183711 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980") on node "crc" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.188588 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.190472 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "ovsdb-rundir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.190966 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191156 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191260 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191461 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xctk\" (UniqueName: \"kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191545 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.191669 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs\") pod \"7e0c945f-6773-4bf8-872d-7eb5110de79f\" (UID: \"7e0c945f-6773-4bf8-872d-7eb5110de79f\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.193964 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.194501 5136 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-dns-svc\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.194590 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.194699 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.194768 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.195317 5136 reconciler_common.go:293] 
"Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.195390 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-afa66b0b-24c7-4f94-ac41-ac00aa21d980\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.197965 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config" (OuterVolumeSpecName: "config") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.203970 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts" (OuterVolumeSpecName: "scripts") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.205899 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-adbe-account-create-update-b276s" event={"ID":"c532fd14-6718-4c7d-9e38-c68bf7b2da6b","Type":"ContainerStarted","Data":"e4994862d77455e776d1f21a360bf4536969a6b61a32dc2dc2986ee9c7770f98"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.205969 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9/ovsdbserver-sb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.206048 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.216420 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_a276ba4e-bbab-4a83-8fd2-d77573782aa6/ovsdbserver-sb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.216731 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.216563 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"a276ba4e-bbab-4a83-8fd2-d77573782aa6","Type":"ContainerDied","Data":"56e76ebe8499616e7e38322f1f1fa612b42b6c40fb5b893b8175391424853f23"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.219993 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk" (OuterVolumeSpecName: "kube-api-access-2xctk") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "kube-api-access-2xctk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.225069 5136 generic.go:334] "Generic (PLEG): container finished" podID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerID="9e5b58fa90ab6a9a965276a68d4ee135aa252e61fbc159c5d0aa6f6134637333" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.225183 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerDied","Data":"9e5b58fa90ab6a9a965276a68d4ee135aa252e61fbc159c5d0aa6f6134637333"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.235031 5136 scope.go:117] "RemoveContainer" containerID="80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.235857 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: if [ -n "nova_cell0" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="nova_cell0" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc 
kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.236579 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423\": container with ID starting with 80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423 not found: ID does not exist" containerID="80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.236610 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423"} err="failed to get container status \"80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423\": rpc error: code = NotFound desc = could not find container \"80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423\": container with ID starting with 80cd4447da5c15081099a2322e91b181cd5f6d15be6bc9e16f91a567a1b6f423 not found: ID does not exist" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.236634 5136 scope.go:117] "RemoveContainer" containerID="adb0722e1140982d66b6bcc4b53d108b1a1da62c36d81757cff1dc3b6b31b52c" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.237077 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with 
CreateContainerConfigError: \"secret \\\"nova-cell0-db-secret\\\" not found\"" pod="openstack/nova-cell0-adbe-account-create-update-b276s" podUID="c532fd14-6718-4c7d-9e38-c68bf7b2da6b" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.240165 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" event={"ID":"6d5ec1f6-0809-4582-902e-00638e6e4580","Type":"ContainerStarted","Data":"21dbb634e022f62bb8152e34fda1da571499258dd53011f3f25fcafd54a5990f"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.257050 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-1_7e0c945f-6773-4bf8-872d-7eb5110de79f/ovsdbserver-sb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.257122 5136 generic.go:334] "Generic (PLEG): container finished" podID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerID="49b205017f1afa7852c73b13644c48ef8382cbecd7e8c8de906f466d5717a06f" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.257231 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-1" event={"ID":"7e0c945f-6773-4bf8-872d-7eb5110de79f","Type":"ContainerDied","Data":"49b205017f1afa7852c73b13644c48ef8382cbecd7e8c8de906f466d5717a06f"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.257242 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-1" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.292771 5136 generic.go:334] "Generic (PLEG): container finished" podID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerID="dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299" exitCode=0 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.292883 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" event={"ID":"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6","Type":"ContainerDied","Data":"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.292947 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" event={"ID":"f93080a1-9819-48ad-a84d-ddc2d6ffe5e6","Type":"ContainerDied","Data":"8c1f27408a6ade394d6e2ab0bd5959f877552185e88f9a7307e4a1f9978accc6"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.293045 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-65bbbb4567-25rj9" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.299864 5136 generic.go:334] "Generic (PLEG): container finished" podID="9fe5d992-c030-4957-8388-763c8fa32d22" containerID="99aa025dc61faebaa87d0e9d2a4856c44ddf012f862d7c369fe941dabbd9836f" exitCode=143 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.299941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerDied","Data":"99aa025dc61faebaa87d0e9d2a4856c44ddf012f862d7c369fe941dabbd9836f"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.298048 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.304844 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.304904 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.305011 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle\") pod 
\"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.305040 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdn2l\" (UniqueName: \"kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.305092 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.305540 5136 generic.go:334] "Generic (PLEG): container finished" podID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerID="b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a" exitCode=0 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.306208 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts" (OuterVolumeSpecName: "scripts") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.306624 5136 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." 
pod="openstack/root-account-create-update-dvqsp" secret="" err="secret \"galera-openstack-cell1-dockercfg-mtswd\" not found" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.306467 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="prometheus" containerID="cri-o://10b1e7850200894bb4a977e72087deccfe8ba698c99b51dc2f4eca430f875e7b" gracePeriod=600 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.307909 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.308066 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.308256 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config\") pod \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\" (UID: \"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.309054 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config" (OuterVolumeSpecName: "config") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.307755 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="thanos-sidecar" containerID="cri-o://58ad47341d36382588ada0b35a93996f0bd176fcc8c2283488644a65072e6d6a" gracePeriod=600 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.309733 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="config-reloader" containerID="cri-o://22f88ba3ba68ee1437a03f43cb79cd30dcc82808fe8518d87eaa412f688babdc" gracePeriod=600 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310494 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310529 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310541 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310551 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e0c945f-6773-4bf8-872d-7eb5110de79f-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310562 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.310574 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2xctk\" (UniqueName: \"kubernetes.io/projected/7e0c945f-6773-4bf8-872d-7eb5110de79f-kube-api-access-2xctk\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.310661 5136 secret.go:188] Couldn't get secret openstack/nova-scheduler-config-data: secret "nova-scheduler-config-data" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.310710 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data podName:41ed7c59-18ee-44ec-8068-ccc9e82485a6 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:55.310691322 +0000 UTC m=+8007.570002513 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data") pod "nova-scheduler-0" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6") : secret "nova-scheduler-config-data" not found Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.317677 5136 scope.go:117] "RemoveContainer" containerID="4df28a5a5a8e8e00719fc2794075f2ee6eccac836a858353622d063e1cd2beff" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.319698 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export 
DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: if [ -n "" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.321180 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-cell1-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-dvqsp" podUID="7660b6b5-094d-4da5-9d34-fe85c863d887" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.326437 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerDied","Data":"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a"} Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.338204 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.368228 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l" (OuterVolumeSpecName: "kube-api-access-zdn2l") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "kube-api-access-zdn2l". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.412403 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdn2l\" (UniqueName: \"kubernetes.io/projected/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-kube-api-access-zdn2l\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.412452 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.414629 5136 scope.go:117] "RemoveContainer" containerID="ff7aad450b5bf148e0d8e2a6a1a41eb2960ad7d591108755ada3cf41b5ab3619" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.467231 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.476495 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.514593 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") on node \"crc\" " Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.517899 5136 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.518022 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts podName:7660b6b5-094d-4da5-9d34-fe85c863d887 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:55.5179948 +0000 UTC m=+8007.777306011 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts") pod "root-account-create-update-dvqsp" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887") : configmap "openstack-cell1-scripts" not found Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.563445 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.563687 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80") on node "crc" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.578063 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.592309 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_48418ecc-b768-4848-b663-1a84761f5b32/ovsdbserver-nb/0.log" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.592401 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.605911 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-65bbbb4567-25rj9"] Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.617752 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619280 5136 scope.go:117] "RemoveContainer" containerID="49b205017f1afa7852c73b13644c48ef8382cbecd7e8c8de906f466d5717a06f" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619577 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619621 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619675 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzrr5\" (UniqueName: \"kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619699 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619856 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") pod \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\" (UID: \"a276ba4e-bbab-4a83-8fd2-d77573782aa6\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619902 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.619964 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.620015 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.621631 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: W0320 09:02:53.621704 5136 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a276ba4e-bbab-4a83-8fd2-d77573782aa6/volumes/kubernetes.io~secret/metrics-certs-tls-certs Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.621712 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.621990 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.622010 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/48418ecc-b768-4848-b663-1a84761f5b32-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.622024 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-09f1b1d6-e123-4d46-9a6f-551776c83d80\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.622472 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts" (OuterVolumeSpecName: "scripts") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.626010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config" (OuterVolumeSpecName: "config") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.629931 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="rabbitmq" containerID="cri-o://55989c472a0077640a315a8d0db45eb57abfdba8bbd4415c0bb59c9d232cd911" gracePeriod=604800 Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.661916 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5" (OuterVolumeSpecName: "kube-api-access-kzrr5") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "kube-api-access-kzrr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.666480 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "a276ba4e-bbab-4a83-8fd2-d77573782aa6" (UID: "a276ba4e-bbab-4a83-8fd2-d77573782aa6"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.681057 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "pvc-373c88d9-f88e-464e-b41f-9f601361fa14". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.699266 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:53 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: if [ -n "aodh" ]; then Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="aodh" Mar 20 09:02:53 crc kubenswrapper[5136]: else Mar 20 09:02:53 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:53 crc kubenswrapper[5136]: fi Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:53 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:53 crc kubenswrapper[5136]: # 2. 
MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:53 crc kubenswrapper[5136]: # 3. create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:53 crc kubenswrapper[5136]: # support updates Mar 20 09:02:53 crc kubenswrapper[5136]: Mar 20 09:02:53 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.702018 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"aodh-db-secret\\\" not found\"" pod="openstack/aodh-35ea-account-create-update-7d4sf" podUID="bdff16b6-0410-4448-a15c-3f22f5890d91" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.707982 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725153 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725202 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") on node \"crc\" " Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725213 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48418ecc-b768-4848-b663-1a84761f5b32-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725224 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725236 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a276ba4e-bbab-4a83-8fd2-d77573782aa6-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.725244 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzrr5\" (UniqueName: \"kubernetes.io/projected/48418ecc-b768-4848-b663-1a84761f5b32-kube-api-access-kzrr5\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.763949 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919 podName:48418ecc-b768-4848-b663-1a84761f5b32 nodeName:}" 
failed. No retries permitted until 2026-03-20 09:02:54.263890993 +0000 UTC m=+8006.523202144 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "ovndbcluster-nb-etc-ovn" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.768558 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.792957 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.831191 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.831634 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-373c88d9-f88e-464e-b41f-9f601361fa14" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14") on node "crc" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.837012 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.837050 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.837065 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-373c88d9-f88e-464e-b41f-9f601361fa14\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-373c88d9-f88e-464e-b41f-9f601361fa14\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.858744 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "7e0c945f-6773-4bf8-872d-7eb5110de79f" (UID: "7e0c945f-6773-4bf8-872d-7eb5110de79f"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.949592 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7e0c945f-6773-4bf8-872d-7eb5110de79f-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.949671 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 20 09:02:53 crc kubenswrapper[5136]: E0320 09:02:53.949716 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data podName:804d1bff-7c63-45a1-bf1a-68f3eedb6ac7 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:55.949701735 +0000 UTC m=+8008.209012886 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data") pod "rabbitmq-server-0" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7") : configmap "rabbitmq-config-data" not found Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.951968 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.975002 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). 
InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:53 crc kubenswrapper[5136]: I0320 09:02:53.975174 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" (UID: "5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.011016 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.052647 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.052684 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.052693 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.052701 5136 reconciler_common.go:293] "Volume detached for 
volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/48418ecc-b768-4848-b663-1a84761f5b32-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.131740 5136 scope.go:117] "RemoveContainer" containerID="dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.132652 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-48knr" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.152235 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154319 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage: secret "prometheus-metric-storage" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154365 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:56.154351921 +0000 UTC m=+8008.413663072 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154583 5136 projected.go:263] Couldn't get secret openstack/prometheus-metric-storage-tls-assets-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154600 5136 projected.go:194] Error preparing data for projected volume tls-assets for pod openstack/prometheus-metric-storage-0: secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154620 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:56.154614509 +0000 UTC m=+8008.413925660 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "tls-assets" (UniqueName: "kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-tls-assets-0" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154654 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-web-config: secret "prometheus-metric-storage-web-config" not found Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154672 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:56.154666791 +0000 UTC m=+8008.413977942 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "web-config" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-web-config" not found
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154713 5136 secret.go:188] Couldn't get secret openstack/prometheus-metric-storage-thanos-prometheus-http-client-file: secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.154731 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:56.154725283 +0000 UTC m=+8008.414036434 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "thanos-prometheus-http-client-file" (UniqueName: "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file") pod "prometheus-metric-storage-0" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : secret "prometheus-metric-storage-thanos-prometheus-http-client-file" not found
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.155592 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.180385 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.183128 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.196670 5136 scope.go:117] "RemoveContainer" containerID="43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.239703 5136 scope.go:117] "RemoveContainer" containerID="dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.240031 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4fd5c29-d308-41d0-9781-9b6d9625c19c/ovsdbserver-nb/0.log"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.240089 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.241563 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299\": container with ID starting with dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299 not found: ID does not exist" containerID="dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.241593 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299"} err="failed to get container status \"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299\": rpc error: code = NotFound desc = could not find container \"dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299\": container with ID starting with dc5c877955a35dec3f0759a463cf453fa6e734d4ecff48bdc9d1ab0f3eab0299 not found: ID does not exist"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.241616 5136 scope.go:117] "RemoveContainer" containerID="43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.249303 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01\": container with ID starting with 43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01 not found: ID does not exist" containerID="43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.249384 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01"} err="failed to get container status \"43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01\": rpc error: code = NotFound desc = could not find container \"43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01\": container with ID starting with 43fd5cf8b218da2033f5722a025f01d99bffa05bc2ffc6e027121deb4f58ff01 not found: ID does not exist"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.267463 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6756\" (UniqueName: \"kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756\") pod \"fe703c94-1aec-47a6-81a7-8510ed330866\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.267504 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts\") pod \"6d5ec1f6-0809-4582-902e-00638e6e4580\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.267579 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") pod \"48418ecc-b768-4848-b663-1a84761f5b32\" (UID: \"48418ecc-b768-4848-b663-1a84761f5b32\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.267688 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jj8b7\" (UniqueName: \"kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7\") pod \"6d5ec1f6-0809-4582-902e-00638e6e4580\" (UID: \"6d5ec1f6-0809-4582-902e-00638e6e4580\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.267830 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts\") pod \"fe703c94-1aec-47a6-81a7-8510ed330866\" (UID: \"fe703c94-1aec-47a6-81a7-8510ed330866\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.268291 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.268329 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data podName:e2c9ab46-3143-4472-a606-cd75def78f41 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:58.26831665 +0000 UTC m=+8010.527627801 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data") pod "rabbitmq-cell1-server-0" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41") : configmap "rabbitmq-cell1-config-data" not found
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.282825 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "fe703c94-1aec-47a6-81a7-8510ed330866" (UID: "fe703c94-1aec-47a6-81a7-8510ed330866"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.285450 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d5ec1f6-0809-4582-902e-00638e6e4580" (UID: "6d5ec1f6-0809-4582-902e-00638e6e4580"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.294786 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756" (OuterVolumeSpecName: "kube-api-access-b6756") pod "fe703c94-1aec-47a6-81a7-8510ed330866" (UID: "fe703c94-1aec-47a6-81a7-8510ed330866"). InnerVolumeSpecName "kube-api-access-b6756". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.296807 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7" (OuterVolumeSpecName: "kube-api-access-jj8b7") pod "6d5ec1f6-0809-4582-902e-00638e6e4580" (UID: "6d5ec1f6-0809-4582-902e-00638e6e4580"). InnerVolumeSpecName "kube-api-access-jj8b7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.303164 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-965f7d5f6-cshp2"]
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.303362 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-965f7d5f6-cshp2" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-httpd" containerID="cri-o://5e5a77d7952567153e8f93b101532ad12cc95f1597c77efe34c080e974b22447" gracePeriod=30
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.303618 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-965f7d5f6-cshp2" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-server" containerID="cri-o://9a6fe348ea134460d09531b2378ad3abce82d81a5457e369dbee025701fbe318" gracePeriod=30
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.314790 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370177 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370241 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370274 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs\") pod \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370439 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs\") pod \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370479 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data\") pod \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370506 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p255f\" (UniqueName: \"kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370529 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6j8r\" (UniqueName: \"kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r\") pod \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370565 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle\") pod \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\" (UID: \"65b4b8da-0eda-4a77-aeed-0a6f9350a942\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370691 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.370733 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.378004 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.378105 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.378610 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs\") pod \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\" (UID: \"f4fd5c29-d308-41d0-9781-9b6d9625c19c\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.379545 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fe703c94-1aec-47a6-81a7-8510ed330866-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.379564 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6756\" (UniqueName: \"kubernetes.io/projected/fe703c94-1aec-47a6-81a7-8510ed330866-kube-api-access-b6756\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.379576 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d5ec1f6-0809-4582-902e-00638e6e4580-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.379586 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jj8b7\" (UniqueName: \"kubernetes.io/projected/6d5ec1f6-0809-4582-902e-00638e6e4580-kube-api-access-jj8b7\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.381445 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.381637 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts" (OuterVolumeSpecName: "scripts") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.381950 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config" (OuterVolumeSpecName: "config") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385176 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-1"]
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385477 5136 generic.go:334] "Generic (PLEG): container finished" podID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerID="58ad47341d36382588ada0b35a93996f0bd176fcc8c2283488644a65072e6d6a" exitCode=0
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385640 5136 generic.go:334] "Generic (PLEG): container finished" podID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerID="22f88ba3ba68ee1437a03f43cb79cd30dcc82808fe8518d87eaa412f688babdc" exitCode=0
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385652 5136 generic.go:334] "Generic (PLEG): container finished" podID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerID="10b1e7850200894bb4a977e72087deccfe8ba698c99b51dc2f4eca430f875e7b" exitCode=0
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385615 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerDied","Data":"58ad47341d36382588ada0b35a93996f0bd176fcc8c2283488644a65072e6d6a"}
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385766 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerDied","Data":"22f88ba3ba68ee1437a03f43cb79cd30dcc82808fe8518d87eaa412f688babdc"}
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.385777 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerDied","Data":"10b1e7850200894bb4a977e72087deccfe8ba698c99b51dc2f4eca430f875e7b"}
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.392552 5136 generic.go:334] "Generic (PLEG): container finished" podID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" containerID="bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961" exitCode=0
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.392622 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65b4b8da-0eda-4a77-aeed-0a6f9350a942","Type":"ContainerDied","Data":"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961"}
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.392651 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"65b4b8da-0eda-4a77-aeed-0a6f9350a942","Type":"ContainerDied","Data":"80db3f58b1ebfbb9a5e2a7946e85f8a9a484a8c2e28fb3c3b16dbcc6876113ea"}
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.392667 5136 scope.go:117] "RemoveContainer" containerID="bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.392750 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.405989 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f" (OuterVolumeSpecName: "kube-api-access-p255f") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "kube-api-access-p255f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.429920 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r" (OuterVolumeSpecName: "kube-api-access-z6j8r") pod "65b4b8da-0eda-4a77-aeed-0a6f9350a942" (UID: "65b4b8da-0eda-4a77-aeed-0a6f9350a942"). InnerVolumeSpecName "kube-api-access-z6j8r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.447043 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70745a35-fe6f-4248-ac87-970763afe00e" path="/var/lib/kubelet/pods/70745a35-fe6f-4248-ac87-970763afe00e/volumes"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.447885 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" path="/var/lib/kubelet/pods/7e0c945f-6773-4bf8-872d-7eb5110de79f/volumes"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.451070 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cefd58c-a889-4893-aa87-b106eae1c7ad" path="/var/lib/kubelet/pods/9cefd58c-a889-4893-aa87-b106eae1c7ad/volumes"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.462395 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" path="/var/lib/kubelet/pods/a276ba4e-bbab-4a83-8fd2-d77573782aa6/volumes"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.463033 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" path="/var/lib/kubelet/pods/f93080a1-9819-48ad-a84d-ddc2d6ffe5e6/volumes"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.464982 5136 generic.go:334] "Generic (PLEG): container finished" podID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerID="c48b0b35287dd607ba880a1efbf0d79170312ce43d17777e16e04be5b17bbe8a" exitCode=0
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.465192 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.485832 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerDied","Data":"c48b0b35287dd607ba880a1efbf0d79170312ce43d17777e16e04be5b17bbe8a"}
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.485882 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qtl55"]
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486275 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486290 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486308 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" containerName="nova-cell1-novncproxy-novncproxy"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486316 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" containerName="nova-cell1-novncproxy-novncproxy"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486328 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486335 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486348 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486355 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486373 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486378 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486390 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="dnsmasq-dns"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486396 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="dnsmasq-dns"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486404 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="ovsdbserver-sb"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486409 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="ovsdbserver-sb"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486423 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="init"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486430 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="init"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486439 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486445 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486453 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="ovsdbserver-nb"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486460 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="ovsdbserver-nb"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486471 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="ovsdbserver-nb"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486476 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="ovsdbserver-nb"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486483 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" containerName="nova-scheduler-scheduler"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486489 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" containerName="nova-scheduler-scheduler"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486498 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="ovsdbserver-sb"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486504 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="ovsdbserver-sb"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.486511 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="ovsdbserver-sb"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486516 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="ovsdbserver-sb"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486683 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486695 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486701 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" containerName="nova-scheduler-scheduler"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486714 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="a276ba4e-bbab-4a83-8fd2-d77573782aa6" containerName="ovsdbserver-sb"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486725 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="ovsdbserver-nb"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486735 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486743 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f93080a1-9819-48ad-a84d-ddc2d6ffe5e6" containerName="dnsmasq-dns"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486753 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" containerName="ovsdbserver-sb"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486764 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="ovsdbserver-sb"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486773 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" containerName="nova-cell1-novncproxy-novncproxy"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486781 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486790 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e0c945f-6773-4bf8-872d-7eb5110de79f" containerName="openstack-network-exporter"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.486801 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="48418ecc-b768-4848-b663-1a84761f5b32" containerName="ovsdbserver-nb"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.487361 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qtl55"]
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.487376 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3e97-account-create-update-6qvr2"]
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.487387 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3e97-account-create-update-6qvr2"]
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.487470 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qtl55"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.490075 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-35ea-account-create-update-7d4sf" event={"ID":"bdff16b6-0410-4448-a15c-3f22f5890d91","Type":"ContainerStarted","Data":"420f494bfdb1a8eee2db9127145e2ede22bd33f1f2aa87cae0d3548617967daa"}
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.492436 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.505878 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-3e97-account-create-update-tkzc8"]
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.507332 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-tkzc8"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.510349 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.513659 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.513702 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p255f\" (UniqueName: \"kubernetes.io/projected/f4fd5c29-d308-41d0-9781-9b6d9625c19c-kube-api-access-p255f\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.513715 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6j8r\" (UniqueName: \"kubernetes.io/projected/65b4b8da-0eda-4a77-aeed-0a6f9350a942-kube-api-access-z6j8r\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.513725 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-config\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.513737 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4fd5c29-d308-41d0-9781-9b6d9625c19c-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.528175 5136 generic.go:334] "Generic (PLEG): container finished" podID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerID="a2cd799ad38f20f3a20df188a90ca9d10f639dafb3f002a582a1fe8b8331c153" exitCode=0
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.528268 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerDied","Data":"a2cd799ad38f20f3a20df188a90ca9d10f639dafb3f002a582a1fe8b8331c153"}
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.542679 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-24c6-account-create-update-48knr" event={"ID":"fe703c94-1aec-47a6-81a7-8510ed330866","Type":"ContainerDied","Data":"a8837f3aa103623ab06077854fdb2ccb4185d7609e123f11e58958b51d99dfcb"}
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.542773 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-24c6-account-create-update-48knr"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.564046 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-gnx9m"]
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.591148 5136 scope.go:117] "RemoveContainer" containerID="bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.591254 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-e664-account-create-update-fb9gm" event={"ID":"6d5ec1f6-0809-4582-902e-00638e6e4580","Type":"ContainerDied","Data":"21dbb634e022f62bb8152e34fda1da571499258dd53011f3f25fcafd54a5990f"}
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.591333 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-e664-account-create-update-fb9gm"
Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.594593 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961\": container with ID starting with bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961 not found: ID does not exist" containerID="bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.594650 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961"} err="failed to get container status \"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961\": rpc error: code = NotFound desc = could not find container \"bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961\": container with ID starting with bf5e034bd14558262e03fbe126fcd810c8c7101bbdef0bc2012ee0b90cfbc961 not found: ID does not exist"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.601875 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-gnx9m"]
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.614897 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pchhm\" (UniqueName: \"kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm\") pod \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.615334 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data\") pod \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.615370 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle\") pod \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\" (UID: \"41ed7c59-18ee-44ec-8068-ccc9e82485a6\") "
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.615751 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwswp\" (UniqueName: \"kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.615796 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.616526 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts\") pod \"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8"
Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.616600 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfccs\" (UniqueName: \"kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs\") pod
\"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.623363 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.624929 5136 generic.go:334] "Generic (PLEG): container finished" podID="ea7881c5-b719-41b0-8046-249f7fdb6f61" containerID="5df9d903ae57ec8baad2fe6c51be0e13f0c8a558bfc5471ea6ef07feb8e164f7" exitCode=0 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.624998 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea7881c5-b719-41b0-8046-249f7fdb6f61","Type":"ContainerDied","Data":"5df9d903ae57ec8baad2fe6c51be0e13f0c8a558bfc5471ea6ef07feb8e164f7"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.627882 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-3e97-account-create-update-tkzc8"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.636501 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_f4fd5c29-d308-41d0-9781-9b6d9625c19c/ovsdbserver-nb/0.log" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.636590 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"f4fd5c29-d308-41d0-9781-9b6d9625c19c","Type":"ContainerDied","Data":"5870d6b24a1657a079a89b9e9211d461a22b66269225de506dabd34bacc879f1"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.636624 5136 scope.go:117] "RemoveContainer" containerID="2ffece60b271290211f6f3963d1642000676cfce31547f3f28dd8ecf96867815" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.636726 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.638804 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm" (OuterVolumeSpecName: "kube-api-access-pchhm") pod "41ed7c59-18ee-44ec-8068-ccc9e82485a6" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6"). InnerVolumeSpecName "kube-api-access-pchhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.647283 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.647530 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" containerID="cri-o://72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" gracePeriod=60 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.648264 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-2_5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9/ovsdbserver-sb/0.log" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.648381 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-2" event={"ID":"5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9","Type":"ContainerDied","Data":"e68e48705b4cdb3e57af6e933adb8006e7437ee5218f249bb6e11769fe0ee800"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.648477 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-2" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.655697 5136 generic.go:334] "Generic (PLEG): container finished" podID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" containerID="fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79" exitCode=0 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.655785 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41ed7c59-18ee-44ec-8068-ccc9e82485a6","Type":"ContainerDied","Data":"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.655875 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.693940 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-1_48418ecc-b768-4848-b663-1a84761f5b32/ovsdbserver-nb/0.log" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.694455 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-1" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.701337 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-1" event={"ID":"48418ecc-b768-4848-b663-1a84761f5b32","Type":"ContainerDied","Data":"e05bc317ee3c118e98b670a0ea0d818712ef16b644c13d4a02ef27c03d16c608"} Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.718096 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736206 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736280 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736376 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736428 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62847\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736465 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736579 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736604 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736727 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736752 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736781 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736868 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.736899 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.737164 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts\") pod \"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.737215 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfccs\" (UniqueName: 
\"kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs\") pod \"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.737424 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwswp\" (UniqueName: \"kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.737491 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.737658 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pchhm\" (UniqueName: \"kubernetes.io/projected/41ed7c59-18ee-44ec-8068-ccc9e82485a6-kube-api-access-pchhm\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.739921 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "48418ecc-b768-4848-b663-1a84761f5b32" (UID: "48418ecc-b768-4848-b663-1a84761f5b32"). InnerVolumeSpecName "pvc-247c0e17-cb61-42ba-9ec9-5459166a5919". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.740190 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.745527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts\") pod \"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.755953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.758226 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.763305 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.764677 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.771133 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.776005 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). 
InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.785936 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3e97-account-create-update-tkzc8"] Mar 20 09:02:54 crc kubenswrapper[5136]: E0320 09:02:54.786877 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-pfccs], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/heat-3e97-account-create-update-tkzc8" podUID="28644e17-7977-4824-aa44-364f4558d0ad" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.795548 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-2bbqx"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.803785 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-2bbqx"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.805973 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "secret-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.818933 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.819129 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerName="heat-cfnapi" containerID="cri-o://3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5" gracePeriod=60 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.822020 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847" (OuterVolumeSpecName: "kube-api-access-62847") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "kube-api-access-62847". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.822743 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfccs\" (UniqueName: \"kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs\") pod \"heat-3e97-account-create-update-tkzc8\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") " pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.822854 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config" (OuterVolumeSpecName: "config") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.824161 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out" (OuterVolumeSpecName: "config-out") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.824258 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.826900 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwswp\" (UniqueName: \"kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp\") pod \"root-account-create-update-qtl55\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.835551 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840483 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840508 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62847\" (UniqueName: \"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-kube-api-access-62847\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840519 5136 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840540 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840563 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") on node \"crc\" " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840577 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840586 5136 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: 
\"kubernetes.io/projected/c5271b0d-ac1b-480c-b4b8-3b634246ae62-tls-assets\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840596 5136 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840607 5136 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c5271b0d-ac1b-480c-b4b8-3b634246ae62-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840615 5136 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c5271b0d-ac1b-480c-b4b8-3b634246ae62-config-out\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840630 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") on node \"crc\" " Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840639 5136 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.840649 5136 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" 
DevicePath \"\"" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.841936 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.842142 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7dbf74ffb7-gw5nj" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerName="heat-api" containerID="cri-o://c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be" gracePeriod=60 Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.856579 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.933765 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.961916 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-24c6-account-create-update-48knr"] Mar 20 09:02:54 crc kubenswrapper[5136]: I0320 09:02:54.998008 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41ed7c59-18ee-44ec-8068-ccc9e82485a6" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.023244 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.032901 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-e664-account-create-update-fb9gm"] Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.046432 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.060010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.065767 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data" (OuterVolumeSpecName: "config-data") pod "65b4b8da-0eda-4a77-aeed-0a6f9350a942" (UID: "65b4b8da-0eda-4a77-aeed-0a6f9350a942"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.071107 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34 podName:c5271b0d-ac1b-480c-b4b8-3b634246ae62 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:55.571082544 +0000 UTC m=+8007.830393695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "prometheus-metric-storage-db" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.105551 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "65b4b8da-0eda-4a77-aeed-0a6f9350a942" (UID: "65b4b8da-0eda-4a77-aeed-0a6f9350a942"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.155001 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.155318 5136 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.155334 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.187648 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data" (OuterVolumeSpecName: "config-data") pod 
"41ed7c59-18ee-44ec-8068-ccc9e82485a6" (UID: "41ed7c59-18ee-44ec-8068-ccc9e82485a6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.229355 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.48:5671: connect: connection refused" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.261123 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.262361 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-247c0e17-cb61-42ba-9ec9-5459166a5919" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919") on node "crc" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.261507 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41ed7c59-18ee-44ec-8068-ccc9e82485a6-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.332557 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.1.154:9292/healthcheck\": read tcp 10.217.0.2:57768->10.217.1.154:9292: read: connection reset by peer" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.332702 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.1.154:9292/healthcheck\": read tcp 10.217.0.2:57766->10.217.1.154:9292: 
read: connection reset by peer" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.337100 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.337270 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81") on node "crc" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.339833 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "65b4b8da-0eda-4a77-aeed-0a6f9350a942" (UID: "65b4b8da-0eda-4a77-aeed-0a6f9350a942"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.365181 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-247c0e17-cb61-42ba-9ec9-5459166a5919\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.365207 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.365218 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4e9f8b0-bb91-4220-ac59-634bb9535a81\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.371946 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "65b4b8da-0eda-4a77-aeed-0a6f9350a942" (UID: "65b4b8da-0eda-4a77-aeed-0a6f9350a942"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.383176 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.1.153:9292/healthcheck\": read tcp 10.217.0.2:37904->10.217.1.153:9292: read: connection reset by peer" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.383469 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.1.153:9292/healthcheck\": read tcp 10.217.0.2:37898->10.217.1.153:9292: read: connection reset by peer" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.469287 5136 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/65b4b8da-0eda-4a77-aeed-0a6f9350a942-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.473933 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.517023 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "f4fd5c29-d308-41d0-9781-9b6d9625c19c" (UID: "f4fd5c29-d308-41d0-9781-9b6d9625c19c"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.528918 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config" (OuterVolumeSpecName: "web-config") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.571160 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.571182 5136 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c5271b0d-ac1b-480c-b4b8-3b634246ae62-web-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.571192 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f4fd5c29-d308-41d0-9781-9b6d9625c19c-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.571249 5136 configmap.go:193] Couldn't get configMap openstack/openstack-cell1-scripts: configmap "openstack-cell1-scripts" not found Mar 20 09:02:55 crc kubenswrapper[5136]: 
E0320 09:02:55.571295 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts podName:7660b6b5-094d-4da5-9d34-fe85c863d887 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:59.57128137 +0000 UTC m=+8011.830592521 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts") pod "root-account-create-update-dvqsp" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887") : configmap "openstack-cell1-scripts" not found Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.594193 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.595588 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.596703 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.596738 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.635985 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.1.104:8776/healthcheck\": read tcp 10.217.0.2:48976->10.217.1.104:8776: read: connection reset by peer" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.673137 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") pod \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\" (UID: \"c5271b0d-ac1b-480c-b4b8-3b634246ae62\") " Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.720261 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c5271b0d-ac1b-480c-b4b8-3b634246ae62" (UID: "c5271b0d-ac1b-480c-b4b8-3b634246ae62"). InnerVolumeSpecName "pvc-d394ecb7-8fc1-4066-a753-5896ab167a34". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.754762 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c5271b0d-ac1b-480c-b4b8-3b634246ae62","Type":"ContainerDied","Data":"441fbbe54f0f16d0c91d190250a9aea863086641a75258b3146f19278093050a"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.754917 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.768507 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-adbe-account-create-update-b276s" event={"ID":"c532fd14-6718-4c7d-9e38-c68bf7b2da6b","Type":"ContainerDied","Data":"e4994862d77455e776d1f21a360bf4536969a6b61a32dc2dc2986ee9c7770f98"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.768551 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4994862d77455e776d1f21a360bf4536969a6b61a32dc2dc2986ee9c7770f98" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.774373 5136 generic.go:334] "Generic (PLEG): container finished" podID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerID="bd88353eb3bfead6453753b043892b43c76148c22dbdd8749c35d5213cf8d63b" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.774592 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerDied","Data":"bd88353eb3bfead6453753b043892b43c76148c22dbdd8749c35d5213cf8d63b"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.776211 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") on node \"crc\" " Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.793423 5136 generic.go:334] "Generic (PLEG): container finished" podID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerID="9849b01109acdf6259a4119de8fce764d067a8b917a7d5b6965b6bd00e1aa60a" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.793557 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" 
event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerDied","Data":"9849b01109acdf6259a4119de8fce764d067a8b917a7d5b6965b6bd00e1aa60a"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.795435 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b8c9-account-create-update-hh6pb" event={"ID":"536a487a-ae23-4eed-9bc8-221a9b85bed4","Type":"ContainerDied","Data":"c5b0b0c1f851b7ef94e6535ec54695cb38543bd96eb082f0402a43b8aed2912c"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.795462 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5b0b0c1f851b7ef94e6535ec54695cb38543bd96eb082f0402a43b8aed2912c" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.803626 5136 generic.go:334] "Generic (PLEG): container finished" podID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerID="9a6fe348ea134460d09531b2378ad3abce82d81a5457e369dbee025701fbe318" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.804020 5136 generic.go:334] "Generic (PLEG): container finished" podID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerID="5e5a77d7952567153e8f93b101532ad12cc95f1597c77efe34c080e974b22447" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.804151 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerDied","Data":"9a6fe348ea134460d09531b2378ad3abce82d81a5457e369dbee025701fbe318"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.804274 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerDied","Data":"5e5a77d7952567153e8f93b101532ad12cc95f1597c77efe34c080e974b22447"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.817930 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" 
event={"ID":"53db9385-e63d-49a6-8dab-854c4bcd01f1","Type":"ContainerDied","Data":"b199eb0de00dd4b5665ed78ec596b43fb8d24fc2002bd3f4dd356a32c51b4138"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.817970 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b199eb0de00dd4b5665ed78ec596b43fb8d24fc2002bd3f4dd356a32c51b4138" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.820394 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.820561 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-d394ecb7-8fc1-4066-a753-5896ab167a34" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34") on node "crc" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.820653 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c7dc-account-create-update-bslnf" event={"ID":"254505fd-2596-4c4a-bf0a-2565e8b3ae5c","Type":"ContainerDied","Data":"11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.820698 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11ff32a7e8d869cd5ce4e704be044bbc65e503615b99ae96e1f014e5fa2103a3" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.827316 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"ea7881c5-b719-41b0-8046-249f7fdb6f61","Type":"ContainerDied","Data":"0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.827356 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d3c98cd3c1e5c913df6b6870b3ffb39782f9b6156d6f724a7d48a2b4387a567" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.830739 5136 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-dvqsp" event={"ID":"7660b6b5-094d-4da5-9d34-fe85c863d887","Type":"ContainerDied","Data":"451e44515c0cda1b30913a6cdb1ebc4f0813478346503aa740945d429ab0443d"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.830776 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="451e44515c0cda1b30913a6cdb1ebc4f0813478346503aa740945d429ab0443d" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.832865 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6249-account-create-update-mtgp6" event={"ID":"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1","Type":"ContainerDied","Data":"b9a5100ed7164058172ea0371e785fef7e937e4b5c850e24c27f2580525e965a"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.832907 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9a5100ed7164058172ea0371e785fef7e937e4b5c850e24c27f2580525e965a" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.835117 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"d4bc380a-4852-40d3-b03d-67f762c778d3","Type":"ContainerDied","Data":"8744afbb6fc5b78de44cc1ad3a2d2c06bbc7e574d3d24b9b63a1c4c9c4199a2b"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.835143 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8744afbb6fc5b78de44cc1ad3a2d2c06bbc7e574d3d24b9b63a1c4c9c4199a2b" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.837777 5136 generic.go:334] "Generic (PLEG): container finished" podID="9fe5d992-c030-4957-8388-763c8fa32d22" containerID="23592f2e3f685cf11f8e09b90281731a317e3331b51d973536b5b6cf9ce01a69" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.837839 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerDied","Data":"23592f2e3f685cf11f8e09b90281731a317e3331b51d973536b5b6cf9ce01a69"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.840504 5136 generic.go:334] "Generic (PLEG): container finished" podID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerID="8e6a89b054dab23d4263c5fb97b6aba8bc51276e7bd2c8d9be34c61f68879a63" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.840556 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerDied","Data":"8e6a89b054dab23d4263c5fb97b6aba8bc51276e7bd2c8d9be34c61f68879a63"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.843680 5136 generic.go:334] "Generic (PLEG): container finished" podID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerID="7ecfa88277a19c2fc4a9782c7beb9c21c6c1a5a38d56723b3e67fc0044f8bbb4" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.843765 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerDied","Data":"7ecfa88277a19c2fc4a9782c7beb9c21c6c1a5a38d56723b3e67fc0044f8bbb4"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.845453 5136 generic.go:334] "Generic (PLEG): container finished" podID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerID="5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d" exitCode=0 Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.845535 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerDied","Data":"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.846464 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/aodh-35ea-account-create-update-7d4sf" event={"ID":"bdff16b6-0410-4448-a15c-3f22f5890d91","Type":"ContainerDied","Data":"420f494bfdb1a8eee2db9127145e2ede22bd33f1f2aa87cae0d3548617967daa"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.846489 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="420f494bfdb1a8eee2db9127145e2ede22bd33f1f2aa87cae0d3548617967daa" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.848534 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"41ed7c59-18ee-44ec-8068-ccc9e82485a6","Type":"ContainerDied","Data":"78428f788f49ec304b60513d248e5c1585ac2ca613eb54e675058865189a70f5"} Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.848586 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:55 crc kubenswrapper[5136]: I0320 09:02:55.878010 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d394ecb7-8fc1-4066-a753-5896ab167a34\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.994879 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 20 09:02:55 crc kubenswrapper[5136]: E0320 09:02:55.994955 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data podName:804d1bff-7c63-45a1-bf1a-68f3eedb6ac7 nodeName:}" failed. No retries permitted until 2026-03-20 09:02:59.994940797 +0000 UTC m=+8012.254251948 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data") pod "rabbitmq-server-0" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7") : configmap "rabbitmq-config-data" not found Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.047252 5136 scope.go:117] "RemoveContainer" containerID="6359501b6448986da36467b6a23a3fd5909f20740da745780317887a779e734a" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.049349 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.060763 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.070607 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.090467 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095391 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095430 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095452 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095485 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095527 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095561 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095593 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrfj8\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095647 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") " Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.095752 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6249-account-create-update-mtgp6"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096226 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096287 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096335 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92bl4\" (UniqueName: \"kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4\") pod \"ea7881c5-b719-41b0-8046-249f7fdb6f61\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096373 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6mxw\" (UniqueName: \"kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096398 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096419 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle\") pod \"ea7881c5-b719-41b0-8046-249f7fdb6f61\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096448 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data\") pod \"ea7881c5-b719-41b0-8046-249f7fdb6f61\" (UID: \"ea7881c5-b719-41b0-8046-249f7fdb6f61\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096493 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096513 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs\") pod \"d4bc380a-4852-40d3-b03d-67f762c778d3\" (UID: \"d4bc380a-4852-40d3-b03d-67f762c778d3\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.096541 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db\") pod \"53db9385-e63d-49a6-8dab-854c4bcd01f1\" (UID: \"53db9385-e63d-49a6-8dab-854c4bcd01f1\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.098367 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.098889 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.099014 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db" (OuterVolumeSpecName: "alertmanager-metric-storage-db") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "alertmanager-metric-storage-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.099220 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.099756 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.115176 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-7d4sf"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.116857 5136 scope.go:117] "RemoveContainer" containerID="8fc531019af740166b849284cf77209f3d16d2b70c219b2f80048bdb08d14be3"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.123228 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.144384 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw" (OuterVolumeSpecName: "kube-api-access-c6mxw") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "kube-api-access-c6mxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.146449 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume" (OuterVolumeSpecName: "config-volume") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.148239 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out" (OuterVolumeSpecName: "config-out") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.149194 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.150574 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8" (OuterVolumeSpecName: "kube-api-access-rrfj8") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "kube-api-access-rrfj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.151027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4" (OuterVolumeSpecName: "kube-api-access-92bl4") pod "ea7881c5-b719-41b0-8046-249f7fdb6f61" (UID: "ea7881c5-b719-41b0-8046-249f7fdb6f61"). InnerVolumeSpecName "kube-api-access-92bl4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.155057 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.49:5671: connect: connection refused"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.157409 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-2"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.168806 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.173313 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-hh6pb"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.176340 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-1"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.183746 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dvqsp"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.193790 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198131 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfq5r\" (UniqueName: \"kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r\") pod \"bdff16b6-0410-4448-a15c-3f22f5890d91\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198161 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts\") pod \"bdff16b6-0410-4448-a15c-3f22f5890d91\" (UID: \"bdff16b6-0410-4448-a15c-3f22f5890d91\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198246 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts\") pod \"536a487a-ae23-4eed-9bc8-221a9b85bed4\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198317 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts\") pod \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198344 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts\") pod \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198419 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-475lv\" (UniqueName: \"kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv\") pod \"536a487a-ae23-4eed-9bc8-221a9b85bed4\" (UID: \"536a487a-ae23-4eed-9bc8-221a9b85bed4\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198504 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcj6l\" (UniqueName: \"kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l\") pod \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\" (UID: \"bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198526 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfzdq\" (UniqueName: \"kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq\") pod \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\" (UID: \"254505fd-2596-4c4a-bf0a-2565e8b3ae5c\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198978 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198989 5136 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-volume\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.198998 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-default\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199006 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d4bc380a-4852-40d3-b03d-67f762c778d3-kolla-config\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199015 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrfj8\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-kube-api-access-rrfj8\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199024 5136 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/53db9385-e63d-49a6-8dab-854c4bcd01f1-tls-assets\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199032 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92bl4\" (UniqueName: \"kubernetes.io/projected/ea7881c5-b719-41b0-8046-249f7fdb6f61-kube-api-access-92bl4\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199040 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c6mxw\" (UniqueName: \"kubernetes.io/projected/d4bc380a-4852-40d3-b03d-67f762c778d3-kube-api-access-c6mxw\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199050 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d4bc380a-4852-40d3-b03d-67f762c778d3-config-data-generated\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199057 5136 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-config-out\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.199065 5136 reconciler_common.go:293] "Volume detached for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/53db9385-e63d-49a6-8dab-854c4bcd01f1-alertmanager-metric-storage-db\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.206125 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "254505fd-2596-4c4a-bf0a-2565e8b3ae5c" (UID: "254505fd-2596-4c4a-bf0a-2565e8b3ae5c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.208851 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1" (UID: "bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.208917 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.208976 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "536a487a-ae23-4eed-9bc8-221a9b85bed4" (UID: "536a487a-ae23-4eed-9bc8-221a9b85bed4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.209141 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bdff16b6-0410-4448-a15c-3f22f5890d91" (UID: "bdff16b6-0410-4448-a15c-3f22f5890d91"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.209335 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l" (OuterVolumeSpecName: "kube-api-access-hcj6l") pod "bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1" (UID: "bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1"). InnerVolumeSpecName "kube-api-access-hcj6l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.209459 5136 scope.go:117] "RemoveContainer" containerID="c38165dadbc17f4d62a65b6c603ad906516c2a8ffc342559c38f59e3a77ec1af"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.213230 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-b276s"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.215227 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv" (OuterVolumeSpecName: "kube-api-access-475lv") pod "536a487a-ae23-4eed-9bc8-221a9b85bed4" (UID: "536a487a-ae23-4eed-9bc8-221a9b85bed4"). InnerVolumeSpecName "kube-api-access-475lv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.217243 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq" (OuterVolumeSpecName: "kube-api-access-rfzdq") pod "254505fd-2596-4c4a-bf0a-2565e8b3ae5c" (UID: "254505fd-2596-4c4a-bf0a-2565e8b3ae5c"). InnerVolumeSpecName "kube-api-access-rfzdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.222725 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r" (OuterVolumeSpecName: "kube-api-access-cfq5r") pod "bdff16b6-0410-4448-a15c-3f22f5890d91" (UID: "bdff16b6-0410-4448-a15c-3f22f5890d91"). InnerVolumeSpecName "kube-api-access-cfq5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.226470 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-tkzc8"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.228595 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.229065 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.236677 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.244778 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.245153 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52" (OuterVolumeSpecName: "mysql-db") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.246609 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-674ffbb556-dfk75"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.253088 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.261451 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.269085 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-965f7d5f6-cshp2"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.269429 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.274196 5136 scope.go:117] "RemoveContainer" containerID="fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79"
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.277511 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.279709 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ea7881c5-b719-41b0-8046-249f7fdb6f61" (UID: "ea7881c5-b719-41b0-8046-249f7fdb6f61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.302310 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config" (OuterVolumeSpecName: "cluster-tls-config") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "cluster-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.304979 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305025 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305071 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfccs\" (UniqueName: \"kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs\") pod \"28644e17-7977-4824-aa44-364f4558d0ad\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305112 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305130 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305160 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305190 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305215 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7bfx\" (UniqueName: \"kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305238 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305261 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305286 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305313 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305339 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305382 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305407 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fxpnx\" (UniqueName: \"kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305431 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305463 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305497 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts\") pod \"7660b6b5-094d-4da5-9d34-fe85c863d887\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305558 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305588 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305617 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts\") pod \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305646 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305680 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c948k\" (UniqueName: \"kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k\") pod \"7660b6b5-094d-4da5-9d34-fe85c863d887\" (UID: \"7660b6b5-094d-4da5-9d34-fe85c863d887\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305707 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305759 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305787 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgnb2\" (UniqueName: \"kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2\") pod \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\" (UID: \"c532fd14-6718-4c7d-9e38-c68bf7b2da6b\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305831 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rjh6\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6\") pod \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\" (UID: \"0007e89c-1f52-4ac8-beed-59d6db6e60fd\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305873 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs\") pod \"6345b1ce-d7d2-420d-8631-e42fd662d790\" (UID: \"6345b1ce-d7d2-420d-8631-e42fd662d790\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305898 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305926 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305956 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs\") pod \"f402e588-3dec-48be-8b5b-5aeaa571b372\" (UID: \"f402e588-3dec-48be-8b5b-5aeaa571b372\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.305980 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4l7c\" (UniqueName: \"kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306000 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts\") pod \"28644e17-7977-4824-aa44-364f4558d0ad\" (UID: \"28644e17-7977-4824-aa44-364f4558d0ad\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306024 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs\") pod \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\" (UID: \"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb\") "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306137 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data" (OuterVolumeSpecName: "config-data") pod "ea7881c5-b719-41b0-8046-249f7fdb6f61" (UID: "ea7881c5-b719-41b0-8046-249f7fdb6f61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306503 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bdff16b6-0410-4448-a15c-3f22f5890d91-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306522 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/536a487a-ae23-4eed-9bc8-221a9b85bed4-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306535 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306546 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ea7881c5-b719-41b0-8046-249f7fdb6f61-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306557 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306568 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-operator-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306581 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-475lv\" (UniqueName: \"kubernetes.io/projected/536a487a-ae23-4eed-9bc8-221a9b85bed4-kube-api-access-475lv\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306596 5136 reconciler_common.go:293] "Volume detached for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-cluster-tls-config\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306609 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcj6l\" (UniqueName: \"kubernetes.io/projected/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1-kube-api-access-hcj6l\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306621 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfzdq\" (UniqueName: \"kubernetes.io/projected/254505fd-2596-4c4a-bf0a-2565e8b3ae5c-kube-api-access-rfzdq\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306648 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") on node \"crc\" "
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.306661 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfq5r\" (UniqueName: \"kubernetes.io/projected/bdff16b6-0410-4448-a15c-3f22f5890d91-kube-api-access-cfq5r\") on node \"crc\" DevicePath \"\""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.308091 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.308841 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx" (OuterVolumeSpecName: "kube-api-access-v7bfx") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "kube-api-access-v7bfx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.308857 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "d4bc380a-4852-40d3-b03d-67f762c778d3" (UID: "d4bc380a-4852-40d3-b03d-67f762c778d3"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.310279 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "log-httpd".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.314776 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.321028 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.321959 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6" (OuterVolumeSpecName: "kube-api-access-5rjh6") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "kube-api-access-5rjh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.327555 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs" (OuterVolumeSpecName: "kube-api-access-pfccs") pod "28644e17-7977-4824-aa44-364f4558d0ad" (UID: "28644e17-7977-4824-aa44-364f4558d0ad"). InnerVolumeSpecName "kube-api-access-pfccs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.328009 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts" (OuterVolumeSpecName: "scripts") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.328829 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs" (OuterVolumeSpecName: "logs") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.329113 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs" (OuterVolumeSpecName: "logs") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.329459 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs" (OuterVolumeSpecName: "logs") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.329801 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c532fd14-6718-4c7d-9e38-c68bf7b2da6b" (UID: "c532fd14-6718-4c7d-9e38-c68bf7b2da6b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.330083 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2" (OuterVolumeSpecName: "kube-api-access-sgnb2") pod "c532fd14-6718-4c7d-9e38-c68bf7b2da6b" (UID: "c532fd14-6718-4c7d-9e38-c68bf7b2da6b"). InnerVolumeSpecName "kube-api-access-sgnb2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.330128 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7660b6b5-094d-4da5-9d34-fe85c863d887" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.334231 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c" (OuterVolumeSpecName: "kube-api-access-c4l7c") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "kube-api-access-c4l7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.342534 5136 scope.go:117] "RemoveContainer" containerID="6504065e281b8c5e6e76cf9517fba24d633b6c7805c447e42fbc49093a42beeb" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.343454 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx" (OuterVolumeSpecName: "kube-api-access-fxpnx") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "kube-api-access-fxpnx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.344025 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "28644e17-7977-4824-aa44-364f4558d0ad" (UID: "28644e17-7977-4824-aa44-364f4558d0ad"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.343424 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts" (OuterVolumeSpecName: "scripts") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.379987 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config" (OuterVolumeSpecName: "web-config") pod "53db9385-e63d-49a6-8dab-854c4bcd01f1" (UID: "53db9385-e63d-49a6-8dab-854c4bcd01f1"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.405765 5136 scope.go:117] "RemoveContainer" containerID="50054ee6901ece61fe4a75813bd8a9abcbe38aad68d2fc8adceaeed3cbddce45" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.421859 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.422089 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-log-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.422788 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7bfx\" (UniqueName: \"kubernetes.io/projected/f402e588-3dec-48be-8b5b-5aeaa571b372-kube-api-access-v7bfx\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.422885 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6345b1ce-d7d2-420d-8631-e42fd662d790-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.422944 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423014 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0007e89c-1f52-4ac8-beed-59d6db6e60fd-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423073 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fxpnx\" (UniqueName: 
\"kubernetes.io/projected/6345b1ce-d7d2-420d-8631-e42fd662d790-kube-api-access-fxpnx\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423255 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423345 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7660b6b5-094d-4da5-9d34-fe85c863d887-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423413 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.423727 5136 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.425896 5136 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/53db9385-e63d-49a6-8dab-854c4bcd01f1-web-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426040 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgnb2\" (UniqueName: \"kubernetes.io/projected/c532fd14-6718-4c7d-9e38-c68bf7b2da6b-kube-api-access-sgnb2\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426123 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rjh6\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-kube-api-access-5rjh6\") on node \"crc\" DevicePath 
\"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426188 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426253 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/28644e17-7977-4824-aa44-364f4558d0ad-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426329 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f402e588-3dec-48be-8b5b-5aeaa571b372-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426399 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4l7c\" (UniqueName: \"kubernetes.io/projected/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-kube-api-access-c4l7c\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.426460 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d4bc380a-4852-40d3-b03d-67f762c778d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.428603 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfccs\" (UniqueName: \"kubernetes.io/projected/28644e17-7977-4824-aa44-364f4558d0ad-kube-api-access-pfccs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.548911 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k" (OuterVolumeSpecName: "kube-api-access-c948k") pod "7660b6b5-094d-4da5-9d34-fe85c863d887" (UID: "7660b6b5-094d-4da5-9d34-fe85c863d887"). 
InnerVolumeSpecName "kube-api-access-c948k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.549028 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.571532 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c948k\" (UniqueName: \"kubernetes.io/projected/7660b6b5-094d-4da5-9d34-fe85c863d887-kube-api-access-c948k\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.571586 5136 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0007e89c-1f52-4ac8-beed-59d6db6e60fd-etc-swift\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.588965 5136 scope.go:117] "RemoveContainer" containerID="58ad47341d36382588ada0b35a93996f0bd176fcc8c2283488644a65072e6d6a" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.597253 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41ed7c59-18ee-44ec-8068-ccc9e82485a6" path="/var/lib/kubelet/pods/41ed7c59-18ee-44ec-8068-ccc9e82485a6/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.609454 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48418ecc-b768-4848-b663-1a84761f5b32" path="/var/lib/kubelet/pods/48418ecc-b768-4848-b663-1a84761f5b32/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.610183 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9" path="/var/lib/kubelet/pods/5e4a4d49-fefc-4b3b-b98a-3068ed8ff1d9/volumes" 
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.626674 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65b4b8da-0eda-4a77-aeed-0a6f9350a942" path="/var/lib/kubelet/pods/65b4b8da-0eda-4a77-aeed-0a6f9350a942/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.627185 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d5ec1f6-0809-4582-902e-00638e6e4580" path="/var/lib/kubelet/pods/6d5ec1f6-0809-4582-902e-00638e6e4580/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.627574 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f0f0206-8535-4184-ae20-349019be47b2" path="/var/lib/kubelet/pods/7f0f0206-8535-4184-ae20-349019be47b2/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.642570 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87521532-0534-4e37-9c80-809877f2a744" path="/var/lib/kubelet/pods/87521532-0534-4e37-9c80-809877f2a744/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.657687 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" path="/var/lib/kubelet/pods/c5271b0d-ac1b-480c-b4b8-3b634246ae62/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.658599 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5f2ce8c-5295-423c-a81f-511d7abd0495" path="/var/lib/kubelet/pods/d5f2ce8c-5295-423c-a81f-511d7abd0495/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.663919 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.664294 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52") on node "crc" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.670306 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4fd5c29-d308-41d0-9781-9b6d9625c19c" path="/var/lib/kubelet/pods/f4fd5c29-d308-41d0-9781-9b6d9625c19c/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.674065 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe703c94-1aec-47a6-81a7-8510ed330866" path="/var/lib/kubelet/pods/fe703c94-1aec-47a6-81a7-8510ed330866/volumes" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.676771 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-beffb70b-21a3-41ac-adb4-058bbd7d2c52\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.781694 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8443: connect: connection refused" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.790986 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data" (OuterVolumeSpecName: "config-data") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.806008 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.845010 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.871685 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.877383 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.882320 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.882365 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.882377 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.896147 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-674ffbb556-dfk75" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.919320 5136 generic.go:334] "Generic (PLEG): container finished" podID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerID="1e2a347a5b7fd1a421ed6a7c665114567c818ec00d533ffab87b5e587c7ecf89" exitCode=0 Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.922716 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data" (OuterVolumeSpecName: "config-data") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.925432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.925893 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.926901 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.935396 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-965f7d5f6-cshp2" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.941129 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.944030 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data" (OuterVolumeSpecName: "config-data") pod "6345b1ce-d7d2-420d-8631-e42fd662d790" (UID: "6345b1ce-d7d2-420d-8631-e42fd662d790"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.955482 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-adbe-account-create-update-b276s" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.959021 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c7dc-account-create-update-bslnf" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.965526 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b8c9-account-create-update-hh6pb" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.966945 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-3e97-account-create-update-tkzc8" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.968275 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.969486 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-35ea-account-create-update-7d4sf" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.969922 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-dvqsp" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.975892 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-6249-account-create-update-mtgp6" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.976908 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.977468 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983861 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983886 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983896 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983904 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983913 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:56 crc kubenswrapper[5136]: I0320 09:02:56.983922 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6345b1ce-d7d2-420d-8631-e42fd662d790-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.023637 5136 scope.go:117] "RemoveContainer" containerID="22f88ba3ba68ee1437a03f43cb79cd30dcc82808fe8518d87eaa412f688babdc" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.059518 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.085379 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.086177 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"6345b1ce-d7d2-420d-8631-e42fd662d790","Type":"ContainerDied","Data":"118347a18261bbd9231e7dfffb48c3b8dac9d276aacba0e195348777f97cd490"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.088784 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.088855 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f402e588-3dec-48be-8b5b-5aeaa571b372","Type":"ContainerDied","Data":"0bd47d70acb181d12064205fc44377458fa88b9280bd44fea4624c5f756f1398"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.088871 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-674ffbb556-dfk75" 
event={"ID":"7db26f77-c83b-4eb6-b513-6b0b2be6ebeb","Type":"ContainerDied","Data":"102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.088885 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.088897 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerDied","Data":"1e2a347a5b7fd1a421ed6a7c665114567c818ec00d533ffab87b5e587c7ecf89"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.089660 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0007e89c-1f52-4ac8-beed-59d6db6e60fd" (UID: "0007e89c-1f52-4ac8-beed-59d6db6e60fd"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.090367 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.091404 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.091846 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="sg-core" containerID="cri-o://22214e3addd0a5c3b338ef171790692102833676b69426afd997304cb1243d2d" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.091922 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="proxy-httpd" containerID="cri-o://65f0fcd421f6ec548cf9de08170a35a1209f40b76c2fe57dae5b8d4eb78f76fb" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.092927 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-notification-agent" containerID="cri-o://d9e0d70b1ab5d2268043ec21cc179228d03e79f0c594fbabbe78f8b02d15cad9" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093342 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" containerName="kube-state-metrics" containerID="cri-o://aa45ed833f1221b9bb131eddde949a0bc22ff821678a36dab8182db02d897f83" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093401 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-central-agent" containerID="cri-o://bdb39aa61401fd83441079957dca9d830d4f38dae3c0e327bcfc878794649036" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093642 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-965f7d5f6-cshp2" 
event={"ID":"0007e89c-1f52-4ac8-beed-59d6db6e60fd","Type":"ContainerDied","Data":"d98661f6d09e3f930d057fd7583b14591e52595630d9c487ff1f51b1c2eb81d0"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093702 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"9fe5d992-c030-4957-8388-763c8fa32d22","Type":"ContainerDied","Data":"e5d5c7c5c5992aa7583b39735a8b9b809168a5e579da62b57e92455fa830342d"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093718 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"25254bce-daf4-4521-ae48-e6c53e458cb4","Type":"ContainerDied","Data":"b4041a772e07ae38dd21f6daf35e3b02ea073600ec0a68c9ba11fe62a374af18"} Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.093742 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-3614-account-create-update-w7hnn"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094270 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="mysql-bootstrap" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094287 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="mysql-bootstrap" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094301 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094307 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094320 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094326 5136 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-api" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094338 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="init-config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094344 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="init-config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094356 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="galera" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094361 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="galera" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094370 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094376 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094385 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094391 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094404 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094412 5136 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094472 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094479 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094490 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.094495 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-log" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.094503 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098739 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098771 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="prometheus" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098820 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="prometheus" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098834 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098840 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098850 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098858 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-api" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098893 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="thanos-sidecar" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098900 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="thanos-sidecar" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098909 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea7881c5-b719-41b0-8046-249f7fdb6f61" containerName="nova-cell1-conductor-conductor" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098914 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea7881c5-b719-41b0-8046-249f7fdb6f61" containerName="nova-cell1-conductor-conductor" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098924 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="init-config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098931 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="init-config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.098945 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="alertmanager" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.098996 5136 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="alertmanager" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.099017 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.099024 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.099085 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-server" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.099091 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-server" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.099099 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.099149 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.099159 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.099170 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100163 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100189 5136 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100224 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100235 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100245 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100255 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea7881c5-b719-41b0-8046-249f7fdb6f61" containerName="nova-cell1-conductor-conductor" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100266 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" containerName="nova-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100278 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100307 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" containerName="glance-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100317 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100324 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="config-reloader" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100331 5136 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="thanos-sidecar" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100341 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" containerName="galera" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100351 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" containerName="glance-httpd" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100378 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="alertmanager" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100390 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5271b0d-ac1b-480c-b4b8-3b634246ae62" containerName="prometheus" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100399 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" containerName="proxy-server" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100411 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" containerName="placement-log" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.100418 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" containerName="cinder-api" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101097 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101185 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-3614-account-create-update-w7hnn"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101197 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"] Mar 20 
09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101295 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-cron-29566621-n7g7j"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101369 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-cron-29566621-n7g7j"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101386 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101445 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3614-account-create-update-w7hnn"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.101515 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qtl55"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.104404 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-69dd969bf5-bw8cr" podUID="6492170d-c425-4bc1-8f26-b002ade2a30a" containerName="keystone-api" containerID="cri-o://0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.104543 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-3614-account-create-update-w7hnn" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.110705 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3" containerName="memcached" containerID="cri-o://6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.116954 5136 scope.go:117] "RemoveContainer" containerID="10b1e7850200894bb4a977e72087deccfe8ba698c99b51dc2f4eca430f875e7b" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.118526 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data" (OuterVolumeSpecName: "config-data") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.184012 5136 scope.go:117] "RemoveContainer" containerID="9a1c12a00febf49412d459684283c5a7557c491ef07270676850c0c1b6e79a69" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.186701 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pt97q\" (UniqueName: \"kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.186733 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.186775 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187033 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187052 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg2m5\" (UniqueName: \"kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187069 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187088 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187120 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: 
\"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187149 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187179 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.187198 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.198432 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f402e588-3dec-48be-8b5b-5aeaa571b372" (UID: "f402e588-3dec-48be-8b5b-5aeaa571b372"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.198793 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs" (OuterVolumeSpecName: "logs") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.200459 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.208611 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts" (OuterVolumeSpecName: "scripts") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.208641 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.209027 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.210496 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q" (OuterVolumeSpecName: "kube-api-access-pt97q") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "kube-api-access-pt97q". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.210827 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.215099 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.217302 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.217370 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" 
podUID="11508a60-8214-4811-898f-9542eee208d5" containerName="nova-cell0-conductor-conductor" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.218182 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0007e89c-1f52-4ac8-beed-59d6db6e60fd-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.218694 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.243101 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5" (OuterVolumeSpecName: "kube-api-access-hg2m5") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "kube-api-access-hg2m5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.245745 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qtl55"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.250701 5136 kuberuntime_manager.go:1274] "Unhandled Error" err=< Mar 20 09:02:57 crc kubenswrapper[5136]: container &Container{Name:mariadb-account-create-update,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-mariadb:39fc4cb70f516d8e9b48225bc0a253ef,Command:[/bin/sh -c #!/bin/bash Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: MYSQL_CMD="mysql -h -u root -P 3306" Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: if [ -n "" ]; then Mar 20 09:02:57 crc kubenswrapper[5136]: GRANT_DATABASE="" Mar 20 09:02:57 crc kubenswrapper[5136]: else Mar 20 09:02:57 crc kubenswrapper[5136]: GRANT_DATABASE="*" Mar 20 09:02:57 crc kubenswrapper[5136]: fi Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: # going for maximum compatibility here: Mar 20 09:02:57 crc kubenswrapper[5136]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Mar 20 09:02:57 crc kubenswrapper[5136]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Mar 20 09:02:57 crc kubenswrapper[5136]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Mar 20 09:02:57 crc kubenswrapper[5136]: # support updates Mar 20 09:02:57 crc kubenswrapper[5136]: Mar 20 09:02:57 crc kubenswrapper[5136]: $MYSQL_CMD < logger="UnhandledError" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.252171 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"openstack-mariadb-root-db-secret\\\" not found\"" pod="openstack/root-account-create-update-qtl55" podUID="0e905e98-1ffd-4a08-bf51-e89f2d589595" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.303279 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.309994 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" (UID: "7db26f77-c83b-4eb6-b513-6b0b2be6ebeb"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.319973 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320132 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320319 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320368 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle\") pod \"9fe5d992-c030-4957-8388-763c8fa32d22\" (UID: \"9fe5d992-c030-4957-8388-763c8fa32d22\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320422 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320458 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs\") pod \"25254bce-daf4-4521-ae48-e6c53e458cb4\" (UID: \"25254bce-daf4-4521-ae48-e6c53e458cb4\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320859 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320874 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320884 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fe5d992-c030-4957-8388-763c8fa32d22-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320892 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320901 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pt97q\" (UniqueName: \"kubernetes.io/projected/25254bce-daf4-4521-ae48-e6c53e458cb4-kube-api-access-pt97q\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320912 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f402e588-3dec-48be-8b5b-5aeaa571b372-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.320963 5136 reconciler_common.go:293] "Volume detached for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.321089 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.321100 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-httpd-run\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.321108 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg2m5\" (UniqueName: \"kubernetes.io/projected/9fe5d992-c030-4957-8388-763c8fa32d22-kube-api-access-hg2m5\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.321117 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fe5d992-c030-4957-8388-763c8fa32d22-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.321144 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs" (OuterVolumeSpecName: "logs") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.323629 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts" (OuterVolumeSpecName: "scripts") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.330728 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.333894 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.341011 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data" (OuterVolumeSpecName: "config-data") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.346526 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.347641 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data" (OuterVolumeSpecName: "config-data") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.355508 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3614-account-create-update-w7hnn" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.373985 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fe5d992-c030-4957-8388-763c8fa32d22" (UID: "9fe5d992-c030-4957-8388-763c8fa32d22"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.408024 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="galera" containerID="cri-o://c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" gracePeriod=30 Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.420837 5136 scope.go:117] "RemoveContainer" containerID="fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79" Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.421337 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79\": container with ID starting with fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79 not found: ID does not exist" containerID="fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.421382 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79"} err="failed to get container status 
\"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79\": rpc error: code = NotFound desc = could not find container \"fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79\": container with ID starting with fd2177fb853527e451d4514e1f42c3c8bc13b2a57df6321d454b27c1bc0cfe79 not found: ID does not exist" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.421408 5136 scope.go:117] "RemoveContainer" containerID="8e6a89b054dab23d4263c5fb97b6aba8bc51276e7bd2c8d9be34c61f68879a63" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.422390 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs\") pod \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.422443 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvh5j\" (UniqueName: \"kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j\") pod \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.422603 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle\") pod \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.422662 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data\") pod \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.422710 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs\") pod \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\" (UID: \"fba581c3-e77a-4db7-ac50-bdb17291b2c7\") " Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423614 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423646 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423660 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423672 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423689 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423702 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/25254bce-daf4-4521-ae48-e6c53e458cb4-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.423713 5136 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/9fe5d992-c030-4957-8388-763c8fa32d22-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.424383 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs" (OuterVolumeSpecName: "logs") pod "fba581c3-e77a-4db7-ac50-bdb17291b2c7" (UID: "fba581c3-e77a-4db7-ac50-bdb17291b2c7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.472138 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j" (OuterVolumeSpecName: "kube-api-access-jvh5j") pod "fba581c3-e77a-4db7-ac50-bdb17291b2c7" (UID: "fba581c3-e77a-4db7-ac50-bdb17291b2c7"). InnerVolumeSpecName "kube-api-access-jvh5j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.479096 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "25254bce-daf4-4521-ae48-e6c53e458cb4" (UID: "25254bce-daf4-4521-ae48-e6c53e458cb4"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.527340 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fba581c3-e77a-4db7-ac50-bdb17291b2c7-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.527370 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvh5j\" (UniqueName: \"kubernetes.io/projected/fba581c3-e77a-4db7-ac50-bdb17291b2c7-kube-api-access-jvh5j\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.527502 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25254bce-daf4-4521-ae48-e6c53e458cb4-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.530572 5136 scope.go:117] "RemoveContainer" containerID="62b81b8fc1d95273635ec6d0f69c524950ac024e8fd9b6ac1d7381fe6f428b6f" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.551000 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data" (OuterVolumeSpecName: "config-data") pod "fba581c3-e77a-4db7-ac50-bdb17291b2c7" (UID: "fba581c3-e77a-4db7-ac50-bdb17291b2c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.551117 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fba581c3-e77a-4db7-ac50-bdb17291b2c7" (UID: "fba581c3-e77a-4db7-ac50-bdb17291b2c7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.569424 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.595411 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "fba581c3-e77a-4db7-ac50-bdb17291b2c7" (UID: "fba581c3-e77a-4db7-ac50-bdb17291b2c7"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.619470 5136 scope.go:117] "RemoveContainer" containerID="bd88353eb3bfead6453753b043892b43c76148c22dbdd8749c35d5213cf8d63b" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.619535 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c7dc-account-create-update-bslnf"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.628647 5136 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.628671 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.628680 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fba581c3-e77a-4db7-ac50-bdb17291b2c7-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.661992 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/swift-proxy-965f7d5f6-cshp2"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.666376 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-965f7d5f6-cshp2"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.710059 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.725089 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6249-account-create-update-mtgp6"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.759664 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.779794 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-35ea-account-create-update-7d4sf"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.811378 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.811997 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7db26f77_c83b_4eb6_b513_6b0b2be6ebeb.slice/crio-102bdb1b6280dfecffcbef32145242d8452dab99236c089131fd7b267b6cf255\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf402e588_3dec_48be_8b5b_5aeaa571b372.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf402e588_3dec_48be_8b5b_5aeaa571b372.slice/crio-0bd47d70acb181d12064205fc44377458fa88b9280bd44fea4624c5f756f1398\": RecentStats: unable to find data in memory cache]" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.815258 5136 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openstack/alertmanager-metric-storage-0" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" containerName="alertmanager" probeResult="failure" output="Get \"http://10.217.1.179:9093/-/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.823442 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-adbe-account-create-update-b276s"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.835623 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.847964 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.849189 5136 scope.go:117] "RemoveContainer" containerID="69c8be45f764ed420f7bbef558c7c52b3207d932e0c8d1c5e50585f4ba78387d" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.875878 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.877441 5136 scope.go:117] "RemoveContainer" containerID="9849b01109acdf6259a4119de8fce764d067a8b917a7d5b6965b6bd00e1aa60a" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.879440 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b8c9-account-create-update-hh6pb"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.900049 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.911088 5136 scope.go:117] "RemoveContainer" containerID="b1983019e3cc484fe8f15d4854d502ecd0a69d384bd0f1cd05cd048f9cc159a0" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.920331 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-dvqsp"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.928794 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.937974 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.953022 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-3e97-account-create-update-tkzc8"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.958042 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-3e97-account-create-update-tkzc8"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.963646 5136 scope.go:117] "RemoveContainer" containerID="9a6fe348ea134460d09531b2378ad3abce82d81a5457e369dbee025701fbe318" Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.971456 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 09:02:57.980518 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.980908 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.985089 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qtl55" event={"ID":"0e905e98-1ffd-4a08-bf51-e89f2d589595","Type":"ContainerStarted","Data":"089c4f1542f895a8c64c404f0f6eac60fa62acf5d0cfe1a1c79aebd6a5cc806f"} Mar 20 09:02:57 crc kubenswrapper[5136]: E0320 
09:02:57.988249 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 20 09:02:57 crc kubenswrapper[5136]: I0320 09:02:57.996142 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: E0320 09:02:58.000944 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Mar 20 09:02:58 crc kubenswrapper[5136]: E0320 09:02:58.001150 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="galera" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.001560 5136 generic.go:334] "Generic (PLEG): container finished" podID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" containerID="aa45ed833f1221b9bb131eddde949a0bc22ff821678a36dab8182db02d897f83" exitCode=2 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.001700 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b","Type":"ContainerDied","Data":"aa45ed833f1221b9bb131eddde949a0bc22ff821678a36dab8182db02d897f83"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.007488 5136 generic.go:334] "Generic (PLEG): container finished" 
podID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerID="dcd83e13ab91d1b3d212755c407d51e55f888a96ac55ba1a109bfc09166fa35d" exitCode=0 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.007680 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerDied","Data":"dcd83e13ab91d1b3d212755c407d51e55f888a96ac55ba1a109bfc09166fa35d"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.008382 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.018593 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023160 5136 generic.go:334] "Generic (PLEG): container finished" podID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerID="65f0fcd421f6ec548cf9de08170a35a1209f40b76c2fe57dae5b8d4eb78f76fb" exitCode=0 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023342 5136 generic.go:334] "Generic (PLEG): container finished" podID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerID="22214e3addd0a5c3b338ef171790692102833676b69426afd997304cb1243d2d" exitCode=2 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023400 5136 generic.go:334] "Generic (PLEG): container finished" podID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerID="bdb39aa61401fd83441079957dca9d830d4f38dae3c0e327bcfc878794649036" exitCode=0 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023484 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerDied","Data":"65f0fcd421f6ec548cf9de08170a35a1209f40b76c2fe57dae5b8d4eb78f76fb"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023591 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerDied","Data":"22214e3addd0a5c3b338ef171790692102833676b69426afd997304cb1243d2d"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.023703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerDied","Data":"bdb39aa61401fd83441079957dca9d830d4f38dae3c0e327bcfc878794649036"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.026930 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.027880 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"fba581c3-e77a-4db7-ac50-bdb17291b2c7","Type":"ContainerDied","Data":"a030a1b8ce6d7dd841b09834af231537b00b2434ef87a4f05635fba547adb80f"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.028050 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.036078 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.046885 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-674ffbb556-dfk75"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.058174 5136 generic.go:334] "Generic (PLEG): container finished" podID="11508a60-8214-4811-898f-9542eee208d5" containerID="2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" exitCode=0 Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.058294 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"11508a60-8214-4811-898f-9542eee208d5","Type":"ContainerDied","Data":"2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c"} Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.068016 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.167:8000/healthcheck\": read tcp 10.217.0.2:52814->10.217.1.167:8000: read: connection reset by peer" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.068709 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-3614-account-create-update-w7hnn" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.069608 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.069639 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.109606 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-7dbf74ffb7-gw5nj" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.166:8004/healthcheck\": read tcp 10.217.0.2:51806->10.217.1.166:8004: read: connection reset by peer" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.146684 5136 scope.go:117] "RemoveContainer" containerID="5e5a77d7952567153e8f93b101532ad12cc95f1597c77efe34c080e974b22447" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.260040 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.264717 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.275957 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.289979 5136 scope.go:117] "RemoveContainer" containerID="23592f2e3f685cf11f8e09b90281731a317e3331b51d973536b5b6cf9ce01a69" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.320377 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345573 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgdmf\" (UniqueName: \"kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf\") pod \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345624 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle\") pod \"11508a60-8214-4811-898f-9542eee208d5\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345708 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config\") pod \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345797 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data\") pod \"11508a60-8214-4811-898f-9542eee208d5\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345948 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle\") pod \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.345987 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjh89\" (UniqueName: \"kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89\") pod \"11508a60-8214-4811-898f-9542eee208d5\" (UID: \"11508a60-8214-4811-898f-9542eee208d5\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.346043 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs\") pod \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\" (UID: \"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b\") " Mar 20 09:02:58 crc kubenswrapper[5136]: E0320 09:02:58.346475 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:58 crc kubenswrapper[5136]: E0320 09:02:58.347133 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data podName:e2c9ab46-3143-4472-a606-cd75def78f41 nodeName:}" failed. No retries permitted until 2026-03-20 09:03:06.34711039 +0000 UTC m=+8018.606421561 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data") pod "rabbitmq-cell1-server-0" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41") : configmap "rabbitmq-cell1-config-data" not found Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.351458 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.389875 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "11508a60-8214-4811-898f-9542eee208d5" (UID: "11508a60-8214-4811-898f-9542eee208d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.390682 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89" (OuterVolumeSpecName: "kube-api-access-xjh89") pod "11508a60-8214-4811-898f-9542eee208d5" (UID: "11508a60-8214-4811-898f-9542eee208d5"). 
InnerVolumeSpecName "kube-api-access-xjh89". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.390855 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf" (OuterVolumeSpecName: "kube-api-access-hgdmf") pod "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" (UID: "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b"). InnerVolumeSpecName "kube-api-access-hgdmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.403995 5136 scope.go:117] "RemoveContainer" containerID="99aa025dc61faebaa87d0e9d2a4856c44ddf012f862d7c369fe941dabbd9836f" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.424232 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" (UID: "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.440934 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0007e89c-1f52-4ac8-beed-59d6db6e60fd" path="/var/lib/kubelet/pods/0007e89c-1f52-4ac8-beed-59d6db6e60fd/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.441950 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data" (OuterVolumeSpecName: "config-data") pod "11508a60-8214-4811-898f-9542eee208d5" (UID: "11508a60-8214-4811-898f-9542eee208d5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.442059 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" (UID: "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.443501 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25254bce-daf4-4521-ae48-e6c53e458cb4" path="/var/lib/kubelet/pods/25254bce-daf4-4521-ae48-e6c53e458cb4/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.447452 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="254505fd-2596-4c4a-bf0a-2565e8b3ae5c" path="/var/lib/kubelet/pods/254505fd-2596-4c4a-bf0a-2565e8b3ae5c/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.452794 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28644e17-7977-4824-aa44-364f4558d0ad" path="/var/lib/kubelet/pods/28644e17-7977-4824-aa44-364f4558d0ad/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.453640 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="536a487a-ae23-4eed-9bc8-221a9b85bed4" path="/var/lib/kubelet/pods/536a487a-ae23-4eed-9bc8-221a9b85bed4/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.454161 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53db9385-e63d-49a6-8dab-854c4bcd01f1" path="/var/lib/kubelet/pods/53db9385-e63d-49a6-8dab-854c4bcd01f1/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.454942 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6345b1ce-d7d2-420d-8631-e42fd662d790" 
path="/var/lib/kubelet/pods/6345b1ce-d7d2-420d-8631-e42fd662d790/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.456450 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7660b6b5-094d-4da5-9d34-fe85c863d887" path="/var/lib/kubelet/pods/7660b6b5-094d-4da5-9d34-fe85c863d887/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.457126 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7db26f77-c83b-4eb6-b513-6b0b2be6ebeb" path="/var/lib/kubelet/pods/7db26f77-c83b-4eb6-b513-6b0b2be6ebeb/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.457547 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.458258 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.458275 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjh89\" (UniqueName: \"kubernetes.io/projected/11508a60-8214-4811-898f-9542eee208d5-kube-api-access-xjh89\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.458285 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgdmf\" (UniqueName: \"kubernetes.io/projected/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-api-access-hgdmf\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.458321 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11508a60-8214-4811-898f-9542eee208d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 
09:02:58.458392 5136 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.457972 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8494da27-4688-4c23-b4bd-77a8cac9ae31" path="/var/lib/kubelet/pods/8494da27-4688-4c23-b4bd-77a8cac9ae31/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.460210 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1" path="/var/lib/kubelet/pods/bc8e8c3e-7cc8-48ce-af41-8c4b0b801cd1/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.460690 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdff16b6-0410-4448-a15c-3f22f5890d91" path="/var/lib/kubelet/pods/bdff16b6-0410-4448-a15c-3f22f5890d91/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.461160 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c532fd14-6718-4c7d-9e38-c68bf7b2da6b" path="/var/lib/kubelet/pods/c532fd14-6718-4c7d-9e38-c68bf7b2da6b/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.461686 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4bc380a-4852-40d3-b03d-67f762c778d3" path="/var/lib/kubelet/pods/d4bc380a-4852-40d3-b03d-67f762c778d3/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.466573 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea7881c5-b719-41b0-8046-249f7fdb6f61" path="/var/lib/kubelet/pods/ea7881c5-b719-41b0-8046-249f7fdb6f61/volumes" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.471320 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f402e588-3dec-48be-8b5b-5aeaa571b372" path="/var/lib/kubelet/pods/f402e588-3dec-48be-8b5b-5aeaa571b372/volumes" 
Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.494698 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.500148 5136 scope.go:117] "RemoveContainer" containerID="7ecfa88277a19c2fc4a9782c7beb9c21c6c1a5a38d56723b3e67fc0044f8bbb4" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.503970 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.509729 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-3614-account-create-update-w7hnn"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.523059 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" (UID: "27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b"). InnerVolumeSpecName "kube-state-metrics-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.544182 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-3614-account-create-update-w7hnn"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.551375 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.561109 5136 scope.go:117] "RemoveContainer" containerID="9e5b58fa90ab6a9a965276a68d4ee135aa252e61fbc159c5d0aa6f6134637333" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.561673 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts\") pod \"0e905e98-1ffd-4a08-bf51-e89f2d589595\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.561716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwswp\" (UniqueName: \"kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp\") pod \"0e905e98-1ffd-4a08-bf51-e89f2d589595\" (UID: \"0e905e98-1ffd-4a08-bf51-e89f2d589595\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.562270 5136 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.563998 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0e905e98-1ffd-4a08-bf51-e89f2d589595" (UID: "0e905e98-1ffd-4a08-bf51-e89f2d589595"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.566086 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp" (OuterVolumeSpecName: "kube-api-access-vwswp") pod "0e905e98-1ffd-4a08-bf51-e89f2d589595" (UID: "0e905e98-1ffd-4a08-bf51-e89f2d589595"). InnerVolumeSpecName "kube-api-access-vwswp". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.595389 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.611387 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.630027 5136 scope.go:117] "RemoveContainer" containerID="1e2a347a5b7fd1a421ed6a7c665114567c818ec00d533ffab87b5e587c7ecf89" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664121 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs\") pod \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664287 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom\") pod \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664330 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle\") pod 
\"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664516 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs\") pod \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664569 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data\") pod \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.664590 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grml6\" (UniqueName: \"kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6\") pod \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\" (UID: \"25dc915a-6dbf-4622-bd14-1b372cfe9acc\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.665111 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwswp\" (UniqueName: \"kubernetes.io/projected/0e905e98-1ffd-4a08-bf51-e89f2d589595-kube-api-access-vwswp\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.665139 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0e905e98-1ffd-4a08-bf51-e89f2d589595-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.668033 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6" (OuterVolumeSpecName: "kube-api-access-grml6") pod 
"25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "kube-api-access-grml6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.670150 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.702149 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data" (OuterVolumeSpecName: "config-data") pod "25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.711690 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.721401 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.737416 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "25dc915a-6dbf-4622-bd14-1b372cfe9acc" (UID: "25dc915a-6dbf-4622-bd14-1b372cfe9acc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766658 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766692 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766701 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766709 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766717 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/25dc915a-6dbf-4622-bd14-1b372cfe9acc-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.766726 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grml6\" 
(UniqueName: \"kubernetes.io/projected/25dc915a-6dbf-4622-bd14-1b372cfe9acc-kube-api-access-grml6\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.799479 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.835640 5136 scope.go:117] "RemoveContainer" containerID="cd5663a9b617be114b64e32a8582baab8d6015f76d7bc3afb172624a4c98b3c7" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868049 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868175 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868212 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868236 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868275 5136 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vrrlc\" (UniqueName: \"kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.868340 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom\") pod \"d397a968-433e-4de9-8ed7-d0247aa5e775\" (UID: \"d397a968-433e-4de9-8ed7-d0247aa5e775\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.887194 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.887244 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc" (OuterVolumeSpecName: "kube-api-access-vrrlc") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "kube-api-access-vrrlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.916274 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.922894 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.930643 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.944566 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data" (OuterVolumeSpecName: "config-data") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.944587 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d397a968-433e-4de9-8ed7-d0247aa5e775" (UID: "d397a968-433e-4de9-8ed7-d0247aa5e775"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.970069 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle\") pod \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.970409 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs\") pod \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.970495 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xz48\" (UniqueName: \"kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48\") pod \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.970879 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config\") pod \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.971120 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data\") pod \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\" (UID: \"6fcd7752-be4a-45af-b12d-f4ee6275b3b3\") " Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.971537 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "6fcd7752-be4a-45af-b12d-f4ee6275b3b3" (UID: "6fcd7752-be4a-45af-b12d-f4ee6275b3b3"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.971573 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data" (OuterVolumeSpecName: "config-data") pod "6fcd7752-be4a-45af-b12d-f4ee6275b3b3" (UID: "6fcd7752-be4a-45af-b12d-f4ee6275b3b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972233 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972318 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vrrlc\" (UniqueName: \"kubernetes.io/projected/d397a968-433e-4de9-8ed7-d0247aa5e775-kube-api-access-vrrlc\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972352 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972363 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972371 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972379 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972388 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.972398 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d397a968-433e-4de9-8ed7-d0247aa5e775-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:58 crc kubenswrapper[5136]: I0320 09:02:58.975315 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48" (OuterVolumeSpecName: "kube-api-access-9xz48") pod "6fcd7752-be4a-45af-b12d-f4ee6275b3b3" (UID: "6fcd7752-be4a-45af-b12d-f4ee6275b3b3"). InnerVolumeSpecName "kube-api-access-9xz48". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.005918 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6fcd7752-be4a-45af-b12d-f4ee6275b3b3" (UID: "6fcd7752-be4a-45af-b12d-f4ee6275b3b3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.027338 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "6fcd7752-be4a-45af-b12d-f4ee6275b3b3" (UID: "6fcd7752-be4a-45af-b12d-f4ee6275b3b3"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.073493 5136 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.073524 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xz48\" (UniqueName: \"kubernetes.io/projected/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-kube-api-access-9xz48\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.073534 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6fcd7752-be4a-45af-b12d-f4ee6275b3b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.082992 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22659681-bc2b-4056-81d6-96b046e45712/ovn-northd/0.log" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.083055 5136 generic.go:334] "Generic (PLEG): container finished" podID="22659681-bc2b-4056-81d6-96b046e45712" containerID="491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" exitCode=139 Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.083156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerDied","Data":"491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.084259 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qtl55" event={"ID":"0e905e98-1ffd-4a08-bf51-e89f2d589595","Type":"ContainerDied","Data":"089c4f1542f895a8c64c404f0f6eac60fa62acf5d0cfe1a1c79aebd6a5cc806f"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.084339 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qtl55" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.087077 5136 generic.go:334] "Generic (PLEG): container finished" podID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerID="3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5" exitCode=0 Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.087170 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" event={"ID":"25dc915a-6dbf-4622-bd14-1b372cfe9acc","Type":"ContainerDied","Data":"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.087188 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" event={"ID":"25dc915a-6dbf-4622-bd14-1b372cfe9acc","Type":"ContainerDied","Data":"3210b5910d0a16311abb43994ee90a4669047a646cbfeed19742a3d4c20fe707"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.087204 5136 scope.go:117] "RemoveContainer" containerID="3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.087302 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-55f46cdf9d-2mcgl" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.092382 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b","Type":"ContainerDied","Data":"5287b5546ce2593541455a64057c115d269b5e5f8d4df65c154feacababa85d9"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.092411 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.096696 5136 generic.go:334] "Generic (PLEG): container finished" podID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3" containerID="6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53" exitCode=0 Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.096749 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6fcd7752-be4a-45af-b12d-f4ee6275b3b3","Type":"ContainerDied","Data":"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.096769 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"6fcd7752-be4a-45af-b12d-f4ee6275b3b3","Type":"ContainerDied","Data":"9cf9cadd89e2b28a829e6e81692bf2693c40f2c59fbdfc4c88536b7ae65a16d3"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.096853 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.104643 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"11508a60-8214-4811-898f-9542eee208d5","Type":"ContainerDied","Data":"a3d117a77c0748e946444e66ceaf44864c92bbd1f42406d2aed51d06c0feb90d"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.104734 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.115048 5136 generic.go:334] "Generic (PLEG): container finished" podID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerID="c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be" exitCode=0 Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.115089 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dbf74ffb7-gw5nj" event={"ID":"d397a968-433e-4de9-8ed7-d0247aa5e775","Type":"ContainerDied","Data":"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.115114 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7dbf74ffb7-gw5nj" event={"ID":"d397a968-433e-4de9-8ed7-d0247aa5e775","Type":"ContainerDied","Data":"ca4ec121b137203fb91f384175b7088e11aa189eac35f5c700b19c4a087e9179"} Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.115178 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7dbf74ffb7-gw5nj" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.144624 5136 scope.go:117] "RemoveContainer" containerID="3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.148280 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.150595 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5\": container with ID starting with 3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5 not found: ID does not exist" containerID="3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.150634 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5"} err="failed to get container status \"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5\": rpc error: code = NotFound desc = could not find container \"3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5\": container with ID starting with 3c6f66a5f458fac9c5e4e3c5c401fe49e93f98ab9783d1eb99b59cc0b35fe3d5 not found: ID does not exist" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.150660 5136 scope.go:117] "RemoveContainer" containerID="aa45ed833f1221b9bb131eddde949a0bc22ff821678a36dab8182db02d897f83" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.162500 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-55f46cdf9d-2mcgl"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.171559 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 09:02:59 crc 
kubenswrapper[5136]: I0320 09:02:59.179494 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.188424 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.202433 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.202981 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d is running failed: container process not found" containerID="491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.204333 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d is running failed: container process not found" containerID="491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.204624 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d is running failed: container process not found" containerID="491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.204655 5136 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="ovn-northd" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.205004 5136 scope.go:117] "RemoveContainer" containerID="6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.229114 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qtl55"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.240417 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-qtl55"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.245589 5136 scope.go:117] "RemoveContainer" containerID="6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.247021 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.252909 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53\": container with ID starting with 6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53 not found: ID does not exist" containerID="6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.252949 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53"} err="failed to get container status \"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53\": rpc error: code = NotFound desc = could not find container 
\"6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53\": container with ID starting with 6d1d3496245e4a95aee0cd94c2cf161e20d1b4ed2af711b68df528966e0cfc53 not found: ID does not exist" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.252974 5136 scope.go:117] "RemoveContainer" containerID="2a26b1b064e0a10c158983613bb278d761e4a07e8187dca49260154436ad446c" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.252970 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7dbf74ffb7-gw5nj"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.259872 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.264982 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.277190 5136 scope.go:117] "RemoveContainer" containerID="c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.307727 5136 scope.go:117] "RemoveContainer" containerID="c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be" Mar 20 09:02:59 crc kubenswrapper[5136]: E0320 09:02:59.308843 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be\": container with ID starting with c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be not found: ID does not exist" containerID="c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.308907 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be"} err="failed to get container status \"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be\": rpc 
error: code = NotFound desc = could not find container \"c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be\": container with ID starting with c94da038d11912247034fdd2fdddaddf98271b2ae30e1d7514d6c19c3c8f08be not found: ID does not exist" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.379141 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_22659681-bc2b-4056-81d6-96b046e45712/ovn-northd/0.log" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.379216 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.478792 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479220 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w7g9\" (UniqueName: \"kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479286 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479331 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config\") pod 
\"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479360 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479456 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.479638 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs\") pod \"22659681-bc2b-4056-81d6-96b046e45712\" (UID: \"22659681-bc2b-4056-81d6-96b046e45712\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.481469 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts" (OuterVolumeSpecName: "scripts") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.482874 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config" (OuterVolumeSpecName: "config") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.483098 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.485503 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9" (OuterVolumeSpecName: "kube-api-access-2w7g9") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "kube-api-access-2w7g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.511966 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.568015 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.577899 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "22659681-bc2b-4056-81d6-96b046e45712" (UID: "22659681-bc2b-4056-81d6-96b046e45712"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.579938 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.581969 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582012 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582024 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w7g9\" (UniqueName: \"kubernetes.io/projected/22659681-bc2b-4056-81d6-96b046e45712-kube-api-access-2w7g9\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582034 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/22659681-bc2b-4056-81d6-96b046e45712-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582043 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582066 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/22659681-bc2b-4056-81d6-96b046e45712-ovn-rundir\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.582075 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/22659681-bc2b-4056-81d6-96b046e45712-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683026 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zckzm\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683083 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683116 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683138 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf\") pod 
\"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683162 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683719 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683751 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683780 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683880 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683915 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.683939 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls\") pod \"e2c9ab46-3143-4472-a606-cd75def78f41\" (UID: \"e2c9ab46-3143-4472-a606-cd75def78f41\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.684090 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.684533 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.684797 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "rabbitmq-plugins". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.686352 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.686727 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm" (OuterVolumeSpecName: "kube-api-access-zckzm") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "kube-api-access-zckzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.688658 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.690883 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.690899 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info" (OuterVolumeSpecName: "pod-info") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.704430 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e" (OuterVolumeSpecName: "persistence") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.721950 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data" (OuterVolumeSpecName: "config-data") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.732640 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf" (OuterVolumeSpecName: "server-conf") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.770831 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "e2c9ab46-3143-4472-a606-cd75def78f41" (UID: "e2c9ab46-3143-4472-a606-cd75def78f41"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786888 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786912 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e2c9ab46-3143-4472-a606-cd75def78f41-pod-info\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786921 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786931 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zckzm\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-kube-api-access-zckzm\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786940 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786948 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-server-conf\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786957 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e2c9ab46-3143-4472-a606-cd75def78f41-plugins-conf\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786986 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") on node \"crc\" " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.786998 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e2c9ab46-3143-4472-a606-cd75def78f41-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.787007 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e2c9ab46-3143-4472-a606-cd75def78f41-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.806386 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.806575 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e") on node "crc" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.810860 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.887693 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trz2t\" (UniqueName: \"kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.887791 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.887884 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.887922 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.887955 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.888032 5136 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.888054 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.888490 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.891956 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.892856 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") pod \"9cf0c76a-c284-44b5-9aee-293de926cb90\" (UID: \"9cf0c76a-c284-44b5-9aee-293de926cb90\") " Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.893328 5136 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-kolla-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.893352 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b1d4ebee-f985-448b-ad7d-461dd09cec0e\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.893367 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-generated\") on node \"crc\" DevicePath \"\"" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.893410 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.893534 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.894627 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t" (OuterVolumeSpecName: "kube-api-access-trz2t") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "kube-api-access-trz2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.918021 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.981670 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:02:59 crc kubenswrapper[5136]: I0320 09:02:59.994172 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e" (OuterVolumeSpecName: "mysql-db") pod "9cf0c76a-c284-44b5-9aee-293de926cb90" (UID: "9cf0c76a-c284-44b5-9aee-293de926cb90"). InnerVolumeSpecName "pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.009957 5136 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.010047 5136 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data podName:804d1bff-7c63-45a1-bf1a-68f3eedb6ac7 nodeName:}" failed. No retries permitted until 2026-03-20 09:03:08.010026825 +0000 UTC m=+8020.269337976 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data") pod "rabbitmq-server-0" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7") : configmap "rabbitmq-config-data" not found Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010129 5136 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010203 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") on node \"crc\" " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010218 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-trz2t\" (UniqueName: \"kubernetes.io/projected/9cf0c76a-c284-44b5-9aee-293de926cb90-kube-api-access-trz2t\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010228 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-config-data-default\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010238 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cf0c76a-c284-44b5-9aee-293de926cb90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.010247 5136 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9cf0c76a-c284-44b5-9aee-293de926cb90-operator-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: 
I0320 09:03:00.030840 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.030981 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e") on node "crc" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.112485 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-be7b4e56-cdec-4573-b53f-75ba7c3c139e\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.127071 5136 generic.go:334] "Generic (PLEG): container finished" podID="e2c9ab46-3143-4472-a606-cd75def78f41" containerID="a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80" exitCode=0 Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.127131 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerDied","Data":"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.127156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e2c9ab46-3143-4472-a606-cd75def78f41","Type":"ContainerDied","Data":"25cfdad0d21b0236b303f371c98360bf9fc45a61724844374d71ad0ec3fcc738"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.127174 5136 scope.go:117] "RemoveContainer" containerID="a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.127267 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.132331 5136 generic.go:334] "Generic (PLEG): container finished" podID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" exitCode=0 Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.132409 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerDied","Data":"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.132408 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.132436 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"9cf0c76a-c284-44b5-9aee-293de926cb90","Type":"ContainerDied","Data":"a3ca3c82737ff9666b59eaa26c9fcedbcaf8829fd4670afceb4988d0c1b4a157"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.137919 5136 generic.go:334] "Generic (PLEG): container finished" podID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerID="55989c472a0077640a315a8d0db45eb57abfdba8bbd4415c0bb59c9d232cd911" exitCode=0 Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.137998 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerDied","Data":"55989c472a0077640a315a8d0db45eb57abfdba8bbd4415c0bb59c9d232cd911"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.151523 5136 scope.go:117] "RemoveContainer" containerID="ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.155048 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-northd-0_22659681-bc2b-4056-81d6-96b046e45712/ovn-northd/0.log" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.155117 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"22659681-bc2b-4056-81d6-96b046e45712","Type":"ContainerDied","Data":"967dc1b56184016d81cec7d8f7dbb1450bc76817bce0736f60753fc52d0a9ea2"} Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.155205 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.191969 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.205102 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.211828 5136 scope.go:117] "RemoveContainer" containerID="a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80" Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.214643 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80\": container with ID starting with a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80 not found: ID does not exist" containerID="a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.214671 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80"} err="failed to get container status \"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80\": rpc error: code = NotFound desc = could not find container \"a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80\": 
container with ID starting with a198c7f8e8377652b482647dfc5c30af750d97e67707d6fc3e807132b202cf80 not found: ID does not exist" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.214689 5136 scope.go:117] "RemoveContainer" containerID="ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9" Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.214966 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9\": container with ID starting with ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9 not found: ID does not exist" containerID="ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.214987 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9"} err="failed to get container status \"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9\": rpc error: code = NotFound desc = could not find container \"ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9\": container with ID starting with ce1a8383fb01688c0f1caeb5e4320574fa6dbdbc0ec482dd28b879fc3c091ab9 not found: ID does not exist" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.214999 5136 scope.go:117] "RemoveContainer" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.229029 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.245886 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.251705 5136 scope.go:117] "RemoveContainer" 
containerID="3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.253378 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.264413 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.294635 5136 scope.go:117] "RemoveContainer" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.295232 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe\": container with ID starting with c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe not found: ID does not exist" containerID="c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.295274 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe"} err="failed to get container status \"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe\": rpc error: code = NotFound desc = could not find container \"c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe\": container with ID starting with c562b712e06d05b672e1e1731b31ed1ba7a31139b9610dfe4465330dd773fabe not found: ID does not exist" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.295305 5136 scope.go:117] "RemoveContainer" containerID="3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77" Mar 20 09:03:00 crc kubenswrapper[5136]: E0320 09:03:00.295786 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77\": container with ID starting with 3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77 not found: ID does not exist" containerID="3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.295841 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77"} err="failed to get container status \"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77\": rpc error: code = NotFound desc = could not find container \"3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77\": container with ID starting with 3c2ce29bf443e1d81d3f6ce701a5e5efbf230b68ea1176fdf10bcba0865d6d77 not found: ID does not exist" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.295859 5136 scope.go:117] "RemoveContainer" containerID="43d8180654ac711b0a6c655f92be552ad6bb0d4e4426596385b695958afa2b74" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.329726 5136 scope.go:117] "RemoveContainer" containerID="491de9d04cd0e7b590564b9e33c6d2d10b2f1cc31960d95aac6f47504fb3ad0d" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.393254 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.413693 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e905e98-1ffd-4a08-bf51-e89f2d589595" path="/var/lib/kubelet/pods/0e905e98-1ffd-4a08-bf51-e89f2d589595/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.417576 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11508a60-8214-4811-898f-9542eee208d5" path="/var/lib/kubelet/pods/11508a60-8214-4811-898f-9542eee208d5/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.418740 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22659681-bc2b-4056-81d6-96b046e45712" path="/var/lib/kubelet/pods/22659681-bc2b-4056-81d6-96b046e45712/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.419445 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" path="/var/lib/kubelet/pods/25dc915a-6dbf-4622-bd14-1b372cfe9acc/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.420781 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" path="/var/lib/kubelet/pods/27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.422269 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3" path="/var/lib/kubelet/pods/6fcd7752-be4a-45af-b12d-f4ee6275b3b3/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.423512 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" path="/var/lib/kubelet/pods/9cf0c76a-c284-44b5-9aee-293de926cb90/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.424672 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fe5d992-c030-4957-8388-763c8fa32d22" 
path="/var/lib/kubelet/pods/9fe5d992-c030-4957-8388-763c8fa32d22/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.425887 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" path="/var/lib/kubelet/pods/d397a968-433e-4de9-8ed7-d0247aa5e775/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.427058 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" path="/var/lib/kubelet/pods/e2c9ab46-3143-4472-a606-cd75def78f41/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.428464 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" path="/var/lib/kubelet/pods/fba581c3-e77a-4db7-ac50-bdb17291b2c7/volumes" Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519639 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519690 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519721 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") " Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519741 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519759 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519788 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519836 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfqqz\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519863 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.519961 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.520366 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.520389 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data\") pod \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\" (UID: \"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.525591 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.526001 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.526062 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz" (OuterVolumeSpecName: "kube-api-access-vfqqz") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "kube-api-access-vfqqz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.526519 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.526903 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info" (OuterVolumeSpecName: "pod-info") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.526929 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.530198 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.548536 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0" (OuterVolumeSpecName: "persistence") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.558251 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data" (OuterVolumeSpecName: "config-data") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.582573 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf" (OuterVolumeSpecName: "server-conf") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.598289 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69dd969bf5-bw8cr"
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622342 5136 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622393 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfqqz\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-kube-api-access-vfqqz\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622406 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622431 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") on node \"crc\" "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622443 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622452 5136 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-server-conf\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622461 5136 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-pod-info\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622469 5136 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-plugins-conf\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622478 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.622487 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.643214 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" (UID: "804d1bff-7c63-45a1-bf1a-68f3eedb6ac7"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.643448 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.643562 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0") on node "crc"
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723619 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723702 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723752 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723791 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723835 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723865 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.723921 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.724069 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt2qd\" (UniqueName: \"kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd\") pod \"6492170d-c425-4bc1-8f26-b002ade2a30a\" (UID: \"6492170d-c425-4bc1-8f26-b002ade2a30a\") "
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.724432 5136 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.724457 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b4c5fac1-54e4-431c-83e7-406d6546bdd0\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.727803 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd" (OuterVolumeSpecName: "kube-api-access-jt2qd") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "kube-api-access-jt2qd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.728396 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.730166 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts" (OuterVolumeSpecName: "scripts") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.730292 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.749226 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data" (OuterVolumeSpecName: "config-data") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.772981 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.789917 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.808123 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "6492170d-c425-4bc1-8f26-b002ade2a30a" (UID: "6492170d-c425-4bc1-8f26-b002ade2a30a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825616 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt2qd\" (UniqueName: \"kubernetes.io/projected/6492170d-c425-4bc1-8f26-b002ade2a30a-kube-api-access-jt2qd\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825652 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825661 5136 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-fernet-keys\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825672 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825682 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825689 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825698 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:00 crc kubenswrapper[5136]: I0320 09:03:00.825705 5136 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6492170d-c425-4bc1-8f26-b002ade2a30a-credential-keys\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.182162 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"804d1bff-7c63-45a1-bf1a-68f3eedb6ac7","Type":"ContainerDied","Data":"2628ea0efb2aa853724bc88efed7cc193022127cf5eeb6dafde84a174a83f933"}
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.182216 5136 scope.go:117] "RemoveContainer" containerID="55989c472a0077640a315a8d0db45eb57abfdba8bbd4415c0bb59c9d232cd911"
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.182227 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.186123 5136 generic.go:334] "Generic (PLEG): container finished" podID="6492170d-c425-4bc1-8f26-b002ade2a30a" containerID="0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b" exitCode=0
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.186156 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69dd969bf5-bw8cr" event={"ID":"6492170d-c425-4bc1-8f26-b002ade2a30a","Type":"ContainerDied","Data":"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b"}
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.186188 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-69dd969bf5-bw8cr" event={"ID":"6492170d-c425-4bc1-8f26-b002ade2a30a","Type":"ContainerDied","Data":"8f7c1605d17f9d449c5a1d9a15decff286543015c063129a09c0f97fced38720"}
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.186202 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-69dd969bf5-bw8cr"
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.192917 5136 generic.go:334] "Generic (PLEG): container finished" podID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerID="7e50b645c14d1546ec6ea5f4cc398a09d244fd42ce33f42ece0b4ffe6904f5e2" exitCode=0
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.192957 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerDied","Data":"7e50b645c14d1546ec6ea5f4cc398a09d244fd42ce33f42ece0b4ffe6904f5e2"}
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.213621 5136 scope.go:117] "RemoveContainer" containerID="9dce885b2b155ce206b241a4559ff57088997d22aa0ed36e735fad7b8993132d"
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.232964 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"]
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.245961 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-69dd969bf5-bw8cr"]
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.254037 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.259737 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.272602 5136 scope.go:117] "RemoveContainer" containerID="0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b"
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.291923 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.307375 5136 scope.go:117] "RemoveContainer" containerID="0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b"
Mar 20 09:03:01 crc kubenswrapper[5136]: E0320 09:03:01.308116 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b\": container with ID starting with 0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b not found: ID does not exist" containerID="0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b"
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.308164 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b"} err="failed to get container status \"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b\": rpc error: code = NotFound desc = could not find container \"0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b\": container with ID starting with 0f88ab7221b1cadfaad64dc8257c4ff6e40bf8acf20bf20f68729db269cd8c5b not found: ID does not exist"
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334437 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") "
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334604 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") "
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334693 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") "
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334743 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") "
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334910 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") "
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.334941 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs\") pod \"040731fb-85ee-40ac-9ea2-3627a5f48766\" (UID: \"040731fb-85ee-40ac-9ea2-3627a5f48766\") "
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.342792 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792" (OuterVolumeSpecName: "kube-api-access-hf792") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "kube-api-access-hf792". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.343322 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts" (OuterVolumeSpecName: "scripts") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.373346 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.377385 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.405447 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.411306 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data" (OuterVolumeSpecName: "config-data") pod "040731fb-85ee-40ac-9ea2-3627a5f48766" (UID: "040731fb-85ee-40ac-9ea2-3627a5f48766"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.436677 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.436858 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.436951 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf792\" (UniqueName: \"kubernetes.io/projected/040731fb-85ee-40ac-9ea2-3627a5f48766-kube-api-access-hf792\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.437011 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-public-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.437074 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:01 crc kubenswrapper[5136]: I0320 09:03:01.437134 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/040731fb-85ee-40ac-9ea2-3627a5f48766-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.207145 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"040731fb-85ee-40ac-9ea2-3627a5f48766","Type":"ContainerDied","Data":"c5f3a5b62a724af9b3292dfbea60cc84cb5ca65e111a8f8018f79664063a08d4"}
Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.207199 5136 scope.go:117] "RemoveContainer" containerID="7e50b645c14d1546ec6ea5f4cc398a09d244fd42ce33f42ece0b4ffe6904f5e2"
Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.207239 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0"
Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.263302 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.266381 5136 scope.go:117] "RemoveContainer" containerID="dcd83e13ab91d1b3d212755c407d51e55f888a96ac55ba1a109bfc09166fa35d"
Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.274173 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"]
Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.290390 5136 scope.go:117] "RemoveContainer" containerID="2ead91f10403f4d804be86964d57e08ade2602b5155572d57113b707313fe0a4"
Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.313345 5136 scope.go:117] "RemoveContainer" containerID="7cef857842ff9d2b9ec6fba6fced2a4a47da1ca6826d17831d4379411662258d"
Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.406179 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" path="/var/lib/kubelet/pods/040731fb-85ee-40ac-9ea2-3627a5f48766/volumes"
Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.407144 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6492170d-c425-4bc1-8f26-b002ade2a30a" path="/var/lib/kubelet/pods/6492170d-c425-4bc1-8f26-b002ade2a30a/volumes"
Mar 20 09:03:02 crc kubenswrapper[5136]: I0320 09:03:02.407770 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" path="/var/lib/kubelet/pods/804d1bff-7c63-45a1-bf1a-68f3eedb6ac7/volumes"
Mar 20 09:03:03 crc kubenswrapper[5136]: I0320 09:03:03.375788 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"]
Mar 20 09:03:03 crc kubenswrapper[5136]: I0320 09:03:03.376319 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mariadb-copy-data" podUID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" containerName="adoption" containerID="cri-o://a92d1246191abb0ecac747ee2b70c76f1c8719ff40ea6ea858ab04c5eaa88bb1" gracePeriod=30
Mar 20 09:03:03 crc kubenswrapper[5136]: I0320 09:03:03.660190 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"]
Mar 20 09:03:03 crc kubenswrapper[5136]: I0320 09:03:03.660422 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-copy-data" podUID="c068d291-989b-4247-8cee-0596033c8ce5" containerName="adoption" containerID="cri-o://b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386" gracePeriod=30
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.239525 5136 generic.go:334] "Generic (PLEG): container finished" podID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerID="d9e0d70b1ab5d2268043ec21cc179228d03e79f0c594fbabbe78f8b02d15cad9" exitCode=0
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.239581 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerDied","Data":"d9e0d70b1ab5d2268043ec21cc179228d03e79f0c594fbabbe78f8b02d15cad9"}
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.315068 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.387177 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") "
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.387242 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") "
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.387272 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") "
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.387320 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") "
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.387414 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj2tp\" (UniqueName: \"kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") "
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.388065 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.388447 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") "
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.388513 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") "
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.388559 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd\") pod \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\" (UID: \"7dbff142-083b-40b7-a0d7-3f17fa9810e3\") "
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.388921 5136 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-log-httpd\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.389149 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.405242 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts" (OuterVolumeSpecName: "scripts") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.437168 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp" (OuterVolumeSpecName: "kube-api-access-lj2tp") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "kube-api-access-lj2tp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.486965 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "sg-core-conf-yaml".
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.489781 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.489806 5136 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.489831 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj2tp\" (UniqueName: \"kubernetes.io/projected/7dbff142-083b-40b7-a0d7-3f17fa9810e3-kube-api-access-lj2tp\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.489842 5136 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7dbff142-083b-40b7-a0d7-3f17fa9810e3-run-httpd\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.534982 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.542017 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.575372 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data" (OuterVolumeSpecName: "config-data") pod "7dbff142-083b-40b7-a0d7-3f17fa9810e3" (UID: "7dbff142-083b-40b7-a0d7-3f17fa9810e3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.591574 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.591607 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:04 crc kubenswrapper[5136]: I0320 09:03:04.591616 5136 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/7dbff142-083b-40b7-a0d7-3f17fa9810e3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.251287 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7dbff142-083b-40b7-a0d7-3f17fa9810e3","Type":"ContainerDied","Data":"5dafeee7b076cbe9bc559d7bf9a4dfd78115dffe5bc845019a4034a5c5f12b09"} Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.251571 5136 scope.go:117] "RemoveContainer" containerID="65f0fcd421f6ec548cf9de08170a35a1209f40b76c2fe57dae5b8d4eb78f76fb" Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.251366 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.282766 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.294602 5136 scope.go:117] "RemoveContainer" containerID="22214e3addd0a5c3b338ef171790692102833676b69426afd997304cb1243d2d" Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.296782 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.313867 5136 scope.go:117] "RemoveContainer" containerID="d9e0d70b1ab5d2268043ec21cc179228d03e79f0c594fbabbe78f8b02d15cad9" Mar 20 09:03:05 crc kubenswrapper[5136]: I0320 09:03:05.331058 5136 scope.go:117] "RemoveContainer" containerID="bdb39aa61401fd83441079957dca9d830d4f38dae3c0e327bcfc878794649036" Mar 20 09:03:05 crc kubenswrapper[5136]: E0320 09:03:05.590120 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:05 crc kubenswrapper[5136]: E0320 09:03:05.592099 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:05 crc kubenswrapper[5136]: E0320 09:03:05.593226 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:05 crc kubenswrapper[5136]: E0320 09:03:05.593293 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:03:06 crc kubenswrapper[5136]: I0320 09:03:06.414386 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" path="/var/lib/kubelet/pods/7dbff142-083b-40b7-a0d7-3f17fa9810e3/volumes" Mar 20 09:03:06 crc kubenswrapper[5136]: I0320 09:03:06.778941 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8443: connect: connection refused" Mar 20 09:03:08 crc kubenswrapper[5136]: E0320 09:03:08.016483 5136 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod305f3f22_2f38_44c5_8e63_1f028edce331.slice/crio-conmon-293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a.scope\": RecentStats: unable to find data in memory cache]" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.278076 5136 generic.go:334] "Generic (PLEG): container finished" podID="305f3f22-2f38-44c5-8e63-1f028edce331" containerID="293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a" exitCode=0 Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.278141 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" 
event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerDied","Data":"293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a"} Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.339145 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444285 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444458 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444534 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444561 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444598 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs\") pod 
\"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444623 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.444689 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn4qs\" (UniqueName: \"kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs\") pod \"305f3f22-2f38-44c5-8e63-1f028edce331\" (UID: \"305f3f22-2f38-44c5-8e63-1f028edce331\") " Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.457015 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.457040 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs" (OuterVolumeSpecName: "kube-api-access-rn4qs") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "kube-api-access-rn4qs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.481974 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.483740 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.486426 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config" (OuterVolumeSpecName: "config") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.488126 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.503115 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "305f3f22-2f38-44c5-8e63-1f028edce331" (UID: "305f3f22-2f38-44c5-8e63-1f028edce331"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553765 5136 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-httpd-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553811 5136 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553849 5136 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-public-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553858 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553868 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn4qs\" (UniqueName: \"kubernetes.io/projected/305f3f22-2f38-44c5-8e63-1f028edce331-kube-api-access-rn4qs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553877 5136 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:08 crc kubenswrapper[5136]: I0320 09:03:08.553885 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/305f3f22-2f38-44c5-8e63-1f028edce331-config\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.288864 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b494fbb57-cd7nw" event={"ID":"305f3f22-2f38-44c5-8e63-1f028edce331","Type":"ContainerDied","Data":"98850495a9f03337c375a82dd9c14c9acf7e4cb2584c596cc8791bfade8a3bf0"} Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.288907 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b494fbb57-cd7nw" Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.288934 5136 scope.go:117] "RemoveContainer" containerID="b54ae1c896c24440630a7756d526255f3def96dbed5cb096fc4d77997e706367" Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.324811 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"] Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.326594 5136 scope.go:117] "RemoveContainer" containerID="293a5f06d0837fa0b5aa6b166b5c1bc91790dda04631163e44e07b267901142a" Mar 20 09:03:09 crc kubenswrapper[5136]: I0320 09:03:09.329973 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7b494fbb57-cd7nw"] Mar 20 09:03:10 crc kubenswrapper[5136]: I0320 09:03:10.429260 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" path="/var/lib/kubelet/pods/305f3f22-2f38-44c5-8e63-1f028edce331/volumes" Mar 20 09:03:15 crc kubenswrapper[5136]: E0320 09:03:15.590646 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: 
container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:15 crc kubenswrapper[5136]: E0320 09:03:15.593806 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:15 crc kubenswrapper[5136]: E0320 09:03:15.595541 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:15 crc kubenswrapper[5136]: E0320 09:03:15.595667 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:03:15 crc kubenswrapper[5136]: I0320 09:03:15.822003 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 09:03:15 crc kubenswrapper[5136]: I0320 09:03:15.822062 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 09:03:16 crc kubenswrapper[5136]: I0320 09:03:16.779966 5136 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-55ffc4694-d4d2v" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon" probeResult="failure" output="Get \"https://10.217.1.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.1.156:8443: connect: connection refused" Mar 20 09:03:16 crc kubenswrapper[5136]: I0320 09:03:16.780133 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 09:03:21 crc kubenswrapper[5136]: I0320 09:03:21.898520 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059320 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-slc4h\" (UniqueName: \"kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059415 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059493 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059533 
5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059573 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059615 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.059684 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs\") pod \"1da401a4-384d-4911-bf25-0aa4c544fd0d\" (UID: \"1da401a4-384d-4911-bf25-0aa4c544fd0d\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.060642 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs" (OuterVolumeSpecName: "logs") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.066619 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h" (OuterVolumeSpecName: "kube-api-access-slc4h") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "kube-api-access-slc4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.067490 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.091132 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.091249 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data" (OuterVolumeSpecName: "config-data") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.101092 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts" (OuterVolumeSpecName: "scripts") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.107495 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "1da401a4-384d-4911-bf25-0aa4c544fd0d" (UID: "1da401a4-384d-4911-bf25-0aa4c544fd0d"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161626 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161656 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161666 5136 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1da401a4-384d-4911-bf25-0aa4c544fd0d-logs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161675 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc 
kubenswrapper[5136]: I0320 09:03:22.161685 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1da401a4-384d-4911-bf25-0aa4c544fd0d-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161693 5136 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/1da401a4-384d-4911-bf25-0aa4c544fd0d-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.161702 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-slc4h\" (UniqueName: \"kubernetes.io/projected/1da401a4-384d-4911-bf25-0aa4c544fd0d-kube-api-access-slc4h\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.365497 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.409254 5136 generic.go:334] "Generic (PLEG): container finished" podID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerID="635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa" exitCode=137 Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.409352 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-55ffc4694-d4d2v" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.411831 5136 generic.go:334] "Generic (PLEG): container finished" podID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerID="49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77" exitCode=137 Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.411910 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.412548 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerDied","Data":"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa"} Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.412703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-55ffc4694-d4d2v" event={"ID":"1da401a4-384d-4911-bf25-0aa4c544fd0d","Type":"ContainerDied","Data":"a94c046fe10c34e65503024040ef0cdbc5574ea11bd41c94fb47af937848986b"} Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.412800 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerDied","Data":"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77"} Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.412922 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"7be786a7-1dee-4cfb-bada-4883a9326c71","Type":"ContainerDied","Data":"902c62cf1140494c6f31a74b7c931afaffaa08d6e8a6315048461c8df99fb197"} Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.413020 5136 scope.go:117] "RemoveContainer" containerID="5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.446655 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"] Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.452641 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-55ffc4694-d4d2v"] Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465405 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465458 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465503 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465533 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465636 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.465707 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mw7n\" (UniqueName: \"kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n\") pod \"7be786a7-1dee-4cfb-bada-4883a9326c71\" (UID: \"7be786a7-1dee-4cfb-bada-4883a9326c71\") " Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 
09:03:22.465765 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.466084 5136 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7be786a7-1dee-4cfb-bada-4883a9326c71-etc-machine-id\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.470030 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.470064 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n" (OuterVolumeSpecName: "kube-api-access-5mw7n") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "kube-api-access-5mw7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.470086 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts" (OuterVolumeSpecName: "scripts") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.499612 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.545797 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data" (OuterVolumeSpecName: "config-data") pod "7be786a7-1dee-4cfb-bada-4883a9326c71" (UID: "7be786a7-1dee-4cfb-bada-4883a9326c71"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.568053 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-scripts\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.568089 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.568104 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data-custom\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.568115 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7be786a7-1dee-4cfb-bada-4883a9326c71-config-data\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 
crc kubenswrapper[5136]: I0320 09:03:22.568128 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mw7n\" (UniqueName: \"kubernetes.io/projected/7be786a7-1dee-4cfb-bada-4883a9326c71-kube-api-access-5mw7n\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.576529 5136 scope.go:117] "RemoveContainer" containerID="635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.632210 5136 scope.go:117] "RemoveContainer" containerID="5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d" Mar 20 09:03:22 crc kubenswrapper[5136]: E0320 09:03:22.632922 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d\": container with ID starting with 5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d not found: ID does not exist" containerID="5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.633009 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d"} err="failed to get container status \"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d\": rpc error: code = NotFound desc = could not find container \"5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d\": container with ID starting with 5d097a32fad24f683773582349a5bd70dc46ca5a6f4845399b1376ac61baa60d not found: ID does not exist" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.633061 5136 scope.go:117] "RemoveContainer" containerID="635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa" Mar 20 09:03:22 crc kubenswrapper[5136]: E0320 09:03:22.633510 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa\": container with ID starting with 635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa not found: ID does not exist" containerID="635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.633613 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa"} err="failed to get container status \"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa\": rpc error: code = NotFound desc = could not find container \"635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa\": container with ID starting with 635dca94b151651e4c36b823760043aacfc192a7da55106aa1e62d819abeafaa not found: ID does not exist" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.633659 5136 scope.go:117] "RemoveContainer" containerID="b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.656589 5136 scope.go:117] "RemoveContainer" containerID="49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.672275 5136 scope.go:117] "RemoveContainer" containerID="b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a" Mar 20 09:03:22 crc kubenswrapper[5136]: E0320 09:03:22.672563 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a\": container with ID starting with b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a not found: ID does not exist" containerID="b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.672591 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a"} err="failed to get container status \"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a\": rpc error: code = NotFound desc = could not find container \"b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a\": container with ID starting with b04544c3c8633753614a0867721f5e1c8cbd9623a90831589d5ac39548c77e5a not found: ID does not exist" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.672609 5136 scope.go:117] "RemoveContainer" containerID="49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77" Mar 20 09:03:22 crc kubenswrapper[5136]: E0320 09:03:22.672913 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77\": container with ID starting with 49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77 not found: ID does not exist" containerID="49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.672938 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77"} err="failed to get container status \"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77\": rpc error: code = NotFound desc = could not find container \"49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77\": container with ID starting with 49d2fddd75a8e57b3d4192156f68050a117173a4bf5a979458cf0d3988bdad77 not found: ID does not exist" Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.749093 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 09:03:22 crc kubenswrapper[5136]: I0320 09:03:22.757676 5136 
kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Mar 20 09:03:24 crc kubenswrapper[5136]: I0320 09:03:24.417731 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" path="/var/lib/kubelet/pods/1da401a4-384d-4911-bf25-0aa4c544fd0d/volumes" Mar 20 09:03:24 crc kubenswrapper[5136]: I0320 09:03:24.419726 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" path="/var/lib/kubelet/pods/7be786a7-1dee-4cfb-bada-4883a9326c71/volumes" Mar 20 09:03:25 crc kubenswrapper[5136]: E0320 09:03:25.590605 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:25 crc kubenswrapper[5136]: E0320 09:03:25.591785 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:25 crc kubenswrapper[5136]: E0320 09:03:25.593463 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:25 crc kubenswrapper[5136]: E0320 09:03:25.593508 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.548587 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" event={"ID":"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226","Type":"ContainerDied","Data":"a92d1246191abb0ecac747ee2b70c76f1c8719ff40ea6ea858ab04c5eaa88bb1"} Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.548739 5136 generic.go:334] "Generic (PLEG): container finished" podID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" containerID="a92d1246191abb0ecac747ee2b70c76f1c8719ff40ea6ea858ab04c5eaa88bb1" exitCode=137 Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.848868 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.978960 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mariadb-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") pod \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.979059 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc9kw\" (UniqueName: \"kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw\") pod \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\" (UID: \"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226\") " Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.990139 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw" (OuterVolumeSpecName: "kube-api-access-cc9kw") pod "30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" (UID: "30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226"). 
InnerVolumeSpecName "kube-api-access-cc9kw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:33 crc kubenswrapper[5136]: I0320 09:03:33.992922 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf" (OuterVolumeSpecName: "mariadb-data") pod "30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" (UID: "30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226"). InnerVolumeSpecName "pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf". PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.015214 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.080373 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cc9kw\" (UniqueName: \"kubernetes.io/projected/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226-kube-api-access-cc9kw\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.080433 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") on node \"crc\" " Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.095578 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.095794 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf") on node "crc" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.181716 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkx6j\" (UniqueName: \"kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j\") pod \"c068d291-989b-4247-8cee-0596033c8ce5\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.181849 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert\") pod \"c068d291-989b-4247-8cee-0596033c8ce5\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.182852 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-data\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") pod \"c068d291-989b-4247-8cee-0596033c8ce5\" (UID: \"c068d291-989b-4247-8cee-0596033c8ce5\") " Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.183328 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4c9f3f1b-9380-4c8e-a123-ee9a27c7d6bf\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.185025 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j" (OuterVolumeSpecName: "kube-api-access-mkx6j") pod 
"c068d291-989b-4247-8cee-0596033c8ce5" (UID: "c068d291-989b-4247-8cee-0596033c8ce5"). InnerVolumeSpecName "kube-api-access-mkx6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.189071 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert" (OuterVolumeSpecName: "ovn-data-cert") pod "c068d291-989b-4247-8cee-0596033c8ce5" (UID: "c068d291-989b-4247-8cee-0596033c8ce5"). InnerVolumeSpecName "ovn-data-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.202486 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72" (OuterVolumeSpecName: "ovn-data") pod "c068d291-989b-4247-8cee-0596033c8ce5" (UID: "c068d291-989b-4247-8cee-0596033c8ce5"). InnerVolumeSpecName "pvc-f678964a-3590-4064-b82f-274887925e72". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.284789 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkx6j\" (UniqueName: \"kubernetes.io/projected/c068d291-989b-4247-8cee-0596033c8ce5-kube-api-access-mkx6j\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.285102 5136 reconciler_common.go:293] "Volume detached for volume \"ovn-data-cert\" (UniqueName: \"kubernetes.io/secret/c068d291-989b-4247-8cee-0596033c8ce5-ovn-data-cert\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.285304 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") on node \"crc\" " Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.303231 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.303505 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f678964a-3590-4064-b82f-274887925e72" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72") on node "crc" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.388360 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-f678964a-3590-4064-b82f-274887925e72\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f678964a-3590-4064-b82f-274887925e72\") on node \"crc\" DevicePath \"\"" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.572540 5136 generic.go:334] "Generic (PLEG): container finished" podID="c068d291-989b-4247-8cee-0596033c8ce5" containerID="b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386" exitCode=137 Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.572578 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-copy-data" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.572643 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c068d291-989b-4247-8cee-0596033c8ce5","Type":"ContainerDied","Data":"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386"} Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.572674 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-copy-data" event={"ID":"c068d291-989b-4247-8cee-0596033c8ce5","Type":"ContainerDied","Data":"e581eb3896caa8dce4da5d70ae2539c97df467c09420153c45b9ba77109b2e63"} Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.572713 5136 scope.go:117] "RemoveContainer" containerID="b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.576533 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mariadb-copy-data" 
event={"ID":"30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226","Type":"ContainerDied","Data":"2c2faf3df1acecb9c43fb4e3dfa1b1bce7305d443462043dbca7203ee15e6fb8"} Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.576605 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mariadb-copy-data" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.601128 5136 scope.go:117] "RemoveContainer" containerID="b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.603019 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mariadb-copy-data"] Mar 20 09:03:34 crc kubenswrapper[5136]: E0320 09:03:34.604567 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386\": container with ID starting with b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386 not found: ID does not exist" containerID="b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.604632 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386"} err="failed to get container status \"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386\": rpc error: code = NotFound desc = could not find container \"b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386\": container with ID starting with b41467e56da65e5424050a8c3c0bd7b4c18471ad0ef23aa5de432ccb8a7ee386 not found: ID does not exist" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.604655 5136 scope.go:117] "RemoveContainer" containerID="a92d1246191abb0ecac747ee2b70c76f1c8719ff40ea6ea858ab04c5eaa88bb1" Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.613980 5136 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/mariadb-copy-data"] Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.620585 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-copy-data"] Mar 20 09:03:34 crc kubenswrapper[5136]: I0320 09:03:34.626414 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-copy-data"] Mar 20 09:03:35 crc kubenswrapper[5136]: E0320 09:03:35.589624 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:35 crc kubenswrapper[5136]: E0320 09:03:35.591509 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:35 crc kubenswrapper[5136]: E0320 09:03:35.592848 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:35 crc kubenswrapper[5136]: E0320 09:03:35.592877 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:03:36 crc kubenswrapper[5136]: I0320 09:03:36.406842 5136 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" path="/var/lib/kubelet/pods/30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226/volumes" Mar 20 09:03:36 crc kubenswrapper[5136]: I0320 09:03:36.407396 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c068d291-989b-4247-8cee-0596033c8ce5" path="/var/lib/kubelet/pods/c068d291-989b-4247-8cee-0596033c8ce5/volumes" Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.516266 5136 scope.go:117] "RemoveContainer" containerID="be01a18339108a324f38d8991f30133c5afcdbbf8536a6fc62d20def93a4fe70" Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.588919 5136 scope.go:117] "RemoveContainer" containerID="036a56dc8be2b1448ecd4eaee7ae6cfc9fce54b35893d14784a5e2a194d245a2" Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.619712 5136 scope.go:117] "RemoveContainer" containerID="f6175692bafced85aff7b1e4e0d62331fe62665eef24726a05ac9691debbacde" Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.655970 5136 scope.go:117] "RemoveContainer" containerID="bd3d02ee4935523ab4eb4492588717b04d2271f1f22be17fbab8ebb01a7e4c49" Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.677735 5136 scope.go:117] "RemoveContainer" containerID="a2cd799ad38f20f3a20df188a90ca9d10f639dafb3f002a582a1fe8b8331c153" Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.709902 5136 scope.go:117] "RemoveContainer" containerID="249161d201c86c824c827618ff63c208e5d4f7836f10cac1b975be51340fe2bc" Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.750215 5136 scope.go:117] "RemoveContainer" containerID="5df9d903ae57ec8baad2fe6c51be0e13f0c8a558bfc5471ea6ef07feb8e164f7" Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.778253 5136 scope.go:117] "RemoveContainer" containerID="ed9ea6d3f8369f00e748cdbd6f737c4f4b838eb8db7325e29aaf558dc66f2d6f" Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.832690 5136 scope.go:117] "RemoveContainer" 
containerID="bbdf8dabf0d2951b09ae3f63cdd6eda3f6af581fbac68d093607e16820b73e60" Mar 20 09:03:42 crc kubenswrapper[5136]: I0320 09:03:42.871388 5136 scope.go:117] "RemoveContainer" containerID="0596189127fdfe0bb4f8c43c9a281f3d0d01a460eb398984e9cddcf692a4beaa" Mar 20 09:03:45 crc kubenswrapper[5136]: E0320 09:03:45.590269 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:45 crc kubenswrapper[5136]: E0320 09:03:45.591948 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:45 crc kubenswrapper[5136]: E0320 09:03:45.593280 5136 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Mar 20 09:03:45 crc kubenswrapper[5136]: E0320 09:03:45.593310 5136 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-7659754fcd-klwkv" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine" Mar 20 09:03:45 crc kubenswrapper[5136]: I0320 09:03:45.822430 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 09:03:45 crc kubenswrapper[5136]: I0320 09:03:45.822746 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 09:03:45 crc kubenswrapper[5136]: I0320 09:03:45.822948 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 09:03:45 crc kubenswrapper[5136]: I0320 09:03:45.823781 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 20 09:03:45 crc kubenswrapper[5136]: I0320 09:03:45.823980 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b" gracePeriod=600 Mar 20 09:03:46 crc kubenswrapper[5136]: I0320 09:03:46.695567 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b" exitCode=0 Mar 20 09:03:46 crc kubenswrapper[5136]: I0320 09:03:46.695608 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b"}
Mar 20 09:03:46 crc kubenswrapper[5136]: I0320 09:03:46.695867 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerStarted","Data":"85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51"}
Mar 20 09:03:46 crc kubenswrapper[5136]: I0320 09:03:46.695887 5136 scope.go:117] "RemoveContainer" containerID="89ca86a2a61df6a14133bf7e6e12d3fe4a43dc4a86ec9ee1b60f9f4fab9a78a2"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.146441 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-7659754fcd-klwkv"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.331350 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle\") pod \"53ac16e5-846e-40c1-a361-0815d231345a\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") "
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.331403 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom\") pod \"53ac16e5-846e-40c1-a361-0815d231345a\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") "
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.331453 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhkvs\" (UniqueName: \"kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs\") pod \"53ac16e5-846e-40c1-a361-0815d231345a\" (UID:
\"53ac16e5-846e-40c1-a361-0815d231345a\") "
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.331511 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data\") pod \"53ac16e5-846e-40c1-a361-0815d231345a\" (UID: \"53ac16e5-846e-40c1-a361-0815d231345a\") "
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.341072 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "53ac16e5-846e-40c1-a361-0815d231345a" (UID: "53ac16e5-846e-40c1-a361-0815d231345a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.341489 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs" (OuterVolumeSpecName: "kube-api-access-vhkvs") pod "53ac16e5-846e-40c1-a361-0815d231345a" (UID: "53ac16e5-846e-40c1-a361-0815d231345a"). InnerVolumeSpecName "kube-api-access-vhkvs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.355340 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "53ac16e5-846e-40c1-a361-0815d231345a" (UID: "53ac16e5-846e-40c1-a361-0815d231345a"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.382220 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data" (OuterVolumeSpecName: "config-data") pod "53ac16e5-846e-40c1-a361-0815d231345a" (UID: "53ac16e5-846e-40c1-a361-0815d231345a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.435008 5136 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.435066 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.435089 5136 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/53ac16e5-846e-40c1-a361-0815d231345a-config-data-custom\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.435110 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhkvs\" (UniqueName: \"kubernetes.io/projected/53ac16e5-846e-40c1-a361-0815d231345a-kube-api-access-vhkvs\") on node \"crc\" DevicePath \"\""
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.778517 5136 generic.go:334] "Generic (PLEG): container finished" podID="53ac16e5-846e-40c1-a361-0815d231345a" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6" exitCode=137
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.778582 5136 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/heat-engine-7659754fcd-klwkv"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.778609 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7659754fcd-klwkv" event={"ID":"53ac16e5-846e-40c1-a361-0815d231345a","Type":"ContainerDied","Data":"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"}
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.778656 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-7659754fcd-klwkv" event={"ID":"53ac16e5-846e-40c1-a361-0815d231345a","Type":"ContainerDied","Data":"d50b7d1e3cf945251d9601fa11e23c7c876806ca081f20f39fb0f6c33187004b"}
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.778681 5136 scope.go:117] "RemoveContainer" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.809315 5136 scope.go:117] "RemoveContainer" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"
Mar 20 09:03:55 crc kubenswrapper[5136]: E0320 09:03:55.810065 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6\": container with ID starting with 72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6 not found: ID does not exist" containerID="72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.810124 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6"} err="failed to get container status \"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6\": rpc error: code = NotFound desc = could not find container \"72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6\": container with
ID starting with 72ce6557984ae637479d694bffc2f72c209dadcf2dd6e3a87175efef3fb9d3d6 not found: ID does not exist"
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.839732 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"]
Mar 20 09:03:55 crc kubenswrapper[5136]: I0320 09:03:55.879558 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-7659754fcd-klwkv"]
Mar 20 09:03:56 crc kubenswrapper[5136]: I0320 09:03:56.407896 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53ac16e5-846e-40c1-a361-0815d231345a" path="/var/lib/kubelet/pods/53ac16e5-846e-40c1-a361-0815d231345a/volumes"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160436 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566624-n9gpj"]
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160788 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c068d291-989b-4247-8cee-0596033c8ce5" containerName="adoption"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160803 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c068d291-989b-4247-8cee-0596033c8ce5" containerName="adoption"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160831 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-httpd"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160840 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-httpd"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160853 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" containerName="kube-state-metrics"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160862 5136 state_mem.go:107] "Deleted CPUSet assignment"
podUID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" containerName="kube-state-metrics"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160880 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="setup-container"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160887 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="setup-container"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160901 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160908 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160918 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="setup-container"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160926 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="setup-container"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160936 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="galera"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160944 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="galera"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160957 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="ovn-northd"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160963 5136 state_mem.go:107] "Deleted CPUSet assignment"
podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="ovn-northd"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160972 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160978 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.160990 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.160997 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161008 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6492170d-c425-4bc1-8f26-b002ade2a30a" containerName="keystone-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161014 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6492170d-c425-4bc1-8f26-b002ade2a30a" containerName="keystone-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161024 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-central-agent"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161032 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-central-agent"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161042 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11508a60-8214-4811-898f-9542eee208d5" containerName="nova-cell0-conductor-conductor"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161049 5136 state_mem.go:107] "Deleted CPUSet assignment"
podUID="11508a60-8214-4811-898f-9542eee208d5" containerName="nova-cell0-conductor-conductor"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161058 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="probe"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161066 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="probe"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161077 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="rabbitmq"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161083 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="rabbitmq"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161095 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="sg-core"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161102 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="sg-core"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161111 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-notifier"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161117 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-notifier"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161129 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3" containerName="memcached"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161136 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3"
containerName="memcached"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161147 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="openstack-network-exporter"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161154 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="openstack-network-exporter"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161162 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="cinder-scheduler"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161169 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="cinder-scheduler"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161182 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161189 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161203 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerName="heat-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161210 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerName="heat-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161221 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-notification-agent"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161228 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3"
containerName="ceilometer-notification-agent"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161237 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-log"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161244 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-log"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161254 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerName="heat-cfnapi"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161261 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerName="heat-cfnapi"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161272 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="proxy-httpd"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161279 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="proxy-httpd"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161289 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-evaluator"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161296 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-evaluator"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161305 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" containerName="adoption"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161312 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226"
containerName="adoption"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161325 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon-log"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161332 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon-log"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161344 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-metadata"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161350 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-metadata"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161362 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="mysql-bootstrap"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161369 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="mysql-bootstrap"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161385 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-listener"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161392 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-listener"
Mar 20 09:04:00 crc kubenswrapper[5136]: E0320 09:04:00.161404 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="rabbitmq"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161412 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="rabbitmq"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161565 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6492170d-c425-4bc1-8f26-b002ade2a30a" containerName="keystone-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161583 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="openstack-network-exporter"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161597 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="804d1bff-7c63-45a1-bf1a-68f3eedb6ac7" containerName="rabbitmq"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161611 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161625 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-log"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161639 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="27e5e3ba-6aa4-45f7-bcf0-3c1063b9541b" containerName="kube-state-metrics"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161651 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2c9ab46-3143-4472-a606-cd75def78f41" containerName="rabbitmq"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161661 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-httpd"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161672 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-notification-agent"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161685 5136 memory_manager.go:354] "RemoveStaleState removing state"
podUID="fba581c3-e77a-4db7-ac50-bdb17291b2c7" containerName="nova-metadata-metadata"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161697 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="11508a60-8214-4811-898f-9542eee208d5" containerName="nova-cell0-conductor-conductor"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161712 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c068d291-989b-4247-8cee-0596033c8ce5" containerName="adoption"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161721 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-listener"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161732 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="d397a968-433e-4de9-8ed7-d0247aa5e775" containerName="heat-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161744 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="ceilometer-central-agent"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161757 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-notifier"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161772 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cf0c76a-c284-44b5-9aee-293de926cb90" containerName="galera"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161784 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="305f3f22-2f38-44c5-8e63-1f028edce331" containerName="neutron-api"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161795 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="25dc915a-6dbf-4622-bd14-1b372cfe9acc" containerName="heat-cfnapi"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161808 5136 memory_manager.go:354]
"RemoveStaleState removing state" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon-log"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161837 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="proxy-httpd"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161847 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fcd7752-be4a-45af-b12d-f4ee6275b3b3" containerName="memcached"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161854 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="040731fb-85ee-40ac-9ea2-3627a5f48766" containerName="aodh-evaluator"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161865 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="53ac16e5-846e-40c1-a361-0815d231345a" containerName="heat-engine"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161875 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="22659681-bc2b-4056-81d6-96b046e45712" containerName="ovn-northd"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161883 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dbff142-083b-40b7-a0d7-3f17fa9810e3" containerName="sg-core"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161891 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="30ce7286-48d8-4d5f-9cb7-a3fe3c1a0226" containerName="adoption"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161902 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="1da401a4-384d-4911-bf25-0aa4c544fd0d" containerName="horizon"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161913 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="cinder-scheduler"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.161923 5136 memory_manager.go:354]
"RemoveStaleState removing state" podUID="7be786a7-1dee-4cfb-bada-4883a9326c71" containerName="probe"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.162526 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566624-n9gpj"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.169756 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.170037 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.170653 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.188594 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566624-n9gpj"]
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.212156 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6mlc\" (UniqueName: \"kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc\") pod \"auto-csr-approver-29566624-n9gpj\" (UID: \"2c69bbe6-8752-4d39-b2e4-2eab9134dbda\") " pod="openshift-infra/auto-csr-approver-29566624-n9gpj"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.313741 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6mlc\" (UniqueName: \"kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc\") pod \"auto-csr-approver-29566624-n9gpj\" (UID: \"2c69bbe6-8752-4d39-b2e4-2eab9134dbda\") " pod="openshift-infra/auto-csr-approver-29566624-n9gpj"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.338283 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume
\"kube-api-access-l6mlc\" (UniqueName: \"kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc\") pod \"auto-csr-approver-29566624-n9gpj\" (UID: \"2c69bbe6-8752-4d39-b2e4-2eab9134dbda\") " pod="openshift-infra/auto-csr-approver-29566624-n9gpj"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.489267 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566624-n9gpj"
Mar 20 09:04:00 crc kubenswrapper[5136]: I0320 09:04:00.916798 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566624-n9gpj"]
Mar 20 09:04:01 crc kubenswrapper[5136]: I0320 09:04:01.834419 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566624-n9gpj" event={"ID":"2c69bbe6-8752-4d39-b2e4-2eab9134dbda","Type":"ContainerStarted","Data":"8abd98fdbcc90c77025204d2045d2c3addc2296a4b751024a75339d7473623f5"}
Mar 20 09:04:02 crc kubenswrapper[5136]: I0320 09:04:02.846865 5136 generic.go:334] "Generic (PLEG): container finished" podID="2c69bbe6-8752-4d39-b2e4-2eab9134dbda" containerID="35e33276bd939043cf0f403b9a2e455c0ebe9937a874e7a190c199a2c2c31266" exitCode=0
Mar 20 09:04:02 crc kubenswrapper[5136]: I0320 09:04:02.846956 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566624-n9gpj" event={"ID":"2c69bbe6-8752-4d39-b2e4-2eab9134dbda","Type":"ContainerDied","Data":"35e33276bd939043cf0f403b9a2e455c0ebe9937a874e7a190c199a2c2c31266"}
Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.246053 5136 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566624-n9gpj"
Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.263977 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6mlc\" (UniqueName: \"kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc\") pod \"2c69bbe6-8752-4d39-b2e4-2eab9134dbda\" (UID: \"2c69bbe6-8752-4d39-b2e4-2eab9134dbda\") "
Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.270128 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc" (OuterVolumeSpecName: "kube-api-access-l6mlc") pod "2c69bbe6-8752-4d39-b2e4-2eab9134dbda" (UID: "2c69bbe6-8752-4d39-b2e4-2eab9134dbda"). InnerVolumeSpecName "kube-api-access-l6mlc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.365575 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6mlc\" (UniqueName: \"kubernetes.io/projected/2c69bbe6-8752-4d39-b2e4-2eab9134dbda-kube-api-access-l6mlc\") on node \"crc\" DevicePath \"\""
Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.861898 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566624-n9gpj" event={"ID":"2c69bbe6-8752-4d39-b2e4-2eab9134dbda","Type":"ContainerDied","Data":"8abd98fdbcc90c77025204d2045d2c3addc2296a4b751024a75339d7473623f5"}
Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.861978 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8abd98fdbcc90c77025204d2045d2c3addc2296a4b751024a75339d7473623f5"
Mar 20 09:04:04 crc kubenswrapper[5136]: I0320 09:04:04.861920 5136 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566624-n9gpj" Mar 20 09:04:05 crc kubenswrapper[5136]: I0320 09:04:05.307974 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566618-mxdtq"] Mar 20 09:04:05 crc kubenswrapper[5136]: I0320 09:04:05.315378 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566618-mxdtq"] Mar 20 09:04:06 crc kubenswrapper[5136]: I0320 09:04:06.407063 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b36af1-10a6-412b-a488-892560533fbc" path="/var/lib/kubelet/pods/c9b36af1-10a6-412b-a488-892560533fbc/volumes" Mar 20 09:04:43 crc kubenswrapper[5136]: I0320 09:04:43.571485 5136 scope.go:117] "RemoveContainer" containerID="95c081e05a9b5bcdeb5bad36239bef981a1b982b216877bab99876ec036176fc" Mar 20 09:04:43 crc kubenswrapper[5136]: I0320 09:04:43.641726 5136 scope.go:117] "RemoveContainer" containerID="a3cfce9ddd036c7cf63f0412122dd66290e1e5696f18bc8cd0c8f3ee086aeaaa" Mar 20 09:04:43 crc kubenswrapper[5136]: I0320 09:04:43.659100 5136 scope.go:117] "RemoveContainer" containerID="c48b0b35287dd607ba880a1efbf0d79170312ce43d17777e16e04be5b17bbe8a" Mar 20 09:04:43 crc kubenswrapper[5136]: I0320 09:04:43.681512 5136 scope.go:117] "RemoveContainer" containerID="20f8a9a945087915c09ba9f6c5bb3fad1e06a23db6077bf675c7ee359a2b9ea4" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.396636 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ppsnr/must-gather-8lzbv"] Mar 20 09:04:57 crc kubenswrapper[5136]: E0320 09:04:57.397715 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c69bbe6-8752-4d39-b2e4-2eab9134dbda" containerName="oc" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.397733 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c69bbe6-8752-4d39-b2e4-2eab9134dbda" containerName="oc" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.397943 
5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c69bbe6-8752-4d39-b2e4-2eab9134dbda" containerName="oc" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.398942 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.400545 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-ppsnr"/"default-dockercfg-w9w8v" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.401035 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ppsnr"/"openshift-service-ca.crt" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.402052 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-ppsnr"/"kube-root-ca.crt" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.406210 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ppsnr/must-gather-8lzbv"] Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.474656 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qnw8\" (UniqueName: \"kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.475068 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.576728 5136 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7qnw8\" (UniqueName: \"kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.577113 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.577684 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.599611 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qnw8\" (UniqueName: \"kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8\") pod \"must-gather-8lzbv\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.715950 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:04:57 crc kubenswrapper[5136]: I0320 09:04:57.990773 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-ppsnr/must-gather-8lzbv"] Mar 20 09:04:58 crc kubenswrapper[5136]: I0320 09:04:58.336584 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" event={"ID":"ca8086a5-288f-4e6b-80ae-07842239f3a9","Type":"ContainerStarted","Data":"cf163eeeb83d64d0e119dfe030e24a3a07d52a3c1734c5a472356d78c486daa0"} Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.183052 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-n92lq"] Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.184377 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.271529 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szvfx\" (UniqueName: \"kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx\") pod \"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.271592 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host\") pod \"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.373472 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szvfx\" (UniqueName: \"kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx\") pod 
\"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.373525 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host\") pod \"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.373672 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host\") pod \"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.386397 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" event={"ID":"ca8086a5-288f-4e6b-80ae-07842239f3a9","Type":"ContainerStarted","Data":"908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307"} Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.386450 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" event={"ID":"ca8086a5-288f-4e6b-80ae-07842239f3a9","Type":"ContainerStarted","Data":"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32"} Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.393096 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szvfx\" (UniqueName: \"kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx\") pod \"crc-debug-n92lq\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.413552 5136 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" podStartSLOduration=1.949763029 podStartE2EDuration="7.413527618s" podCreationTimestamp="2026-03-20 09:04:57 +0000 UTC" firstStartedPulling="2026-03-20 09:04:57.989969044 +0000 UTC m=+8130.249280195" lastFinishedPulling="2026-03-20 09:05:03.453733633 +0000 UTC m=+8135.713044784" observedRunningTime="2026-03-20 09:05:04.400173255 +0000 UTC m=+8136.659484426" watchObservedRunningTime="2026-03-20 09:05:04.413527618 +0000 UTC m=+8136.672838769" Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.499493 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:04 crc kubenswrapper[5136]: W0320 09:05:04.519955 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca05797c_feba_4bf1_969a_cf8268c5416e.slice/crio-552a1a2261ff463a0a4ef09c426c0b600d1a196e6a631809f2c20dc0c7273d62 WatchSource:0}: Error finding container 552a1a2261ff463a0a4ef09c426c0b600d1a196e6a631809f2c20dc0c7273d62: Status 404 returned error can't find the container with id 552a1a2261ff463a0a4ef09c426c0b600d1a196e6a631809f2c20dc0c7273d62 Mar 20 09:05:04 crc kubenswrapper[5136]: I0320 09:05:04.522474 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 09:05:05 crc kubenswrapper[5136]: I0320 09:05:05.394085 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" event={"ID":"ca05797c-feba-4bf1-969a-cf8268c5416e","Type":"ContainerStarted","Data":"552a1a2261ff463a0a4ef09c426c0b600d1a196e6a631809f2c20dc0c7273d62"} Mar 20 09:05:16 crc kubenswrapper[5136]: I0320 09:05:16.492959 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" 
event={"ID":"ca05797c-feba-4bf1-969a-cf8268c5416e","Type":"ContainerStarted","Data":"cc5de32d3b2f3f78d2de7c2c04c373c815b2dc521f0ac4c71c7880cc929ccffe"} Mar 20 09:05:16 crc kubenswrapper[5136]: I0320 09:05:16.507398 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" podStartSLOduration=1.324022177 podStartE2EDuration="12.507381048s" podCreationTimestamp="2026-03-20 09:05:04 +0000 UTC" firstStartedPulling="2026-03-20 09:05:04.522170832 +0000 UTC m=+8136.781481983" lastFinishedPulling="2026-03-20 09:05:15.705529703 +0000 UTC m=+8147.964840854" observedRunningTime="2026-03-20 09:05:16.505772449 +0000 UTC m=+8148.765083600" watchObservedRunningTime="2026-03-20 09:05:16.507381048 +0000 UTC m=+8148.766692199" Mar 20 09:05:31 crc kubenswrapper[5136]: I0320 09:05:31.595092 5136 generic.go:334] "Generic (PLEG): container finished" podID="ca05797c-feba-4bf1-969a-cf8268c5416e" containerID="cc5de32d3b2f3f78d2de7c2c04c373c815b2dc521f0ac4c71c7880cc929ccffe" exitCode=0 Mar 20 09:05:31 crc kubenswrapper[5136]: I0320 09:05:31.595173 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" event={"ID":"ca05797c-feba-4bf1-969a-cf8268c5416e","Type":"ContainerDied","Data":"cc5de32d3b2f3f78d2de7c2c04c373c815b2dc521f0ac4c71c7880cc929ccffe"} Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.684585 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.706873 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-n92lq"] Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.711974 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-n92lq"] Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.787788 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host\") pod \"ca05797c-feba-4bf1-969a-cf8268c5416e\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.787906 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szvfx\" (UniqueName: \"kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx\") pod \"ca05797c-feba-4bf1-969a-cf8268c5416e\" (UID: \"ca05797c-feba-4bf1-969a-cf8268c5416e\") " Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.787953 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host" (OuterVolumeSpecName: "host") pod "ca05797c-feba-4bf1-969a-cf8268c5416e" (UID: "ca05797c-feba-4bf1-969a-cf8268c5416e"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.788185 5136 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/ca05797c-feba-4bf1-969a-cf8268c5416e-host\") on node \"crc\" DevicePath \"\"" Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.793423 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx" (OuterVolumeSpecName: "kube-api-access-szvfx") pod "ca05797c-feba-4bf1-969a-cf8268c5416e" (UID: "ca05797c-feba-4bf1-969a-cf8268c5416e"). InnerVolumeSpecName "kube-api-access-szvfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:05:32 crc kubenswrapper[5136]: I0320 09:05:32.889428 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szvfx\" (UniqueName: \"kubernetes.io/projected/ca05797c-feba-4bf1-969a-cf8268c5416e-kube-api-access-szvfx\") on node \"crc\" DevicePath \"\"" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.610010 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="552a1a2261ff463a0a4ef09c426c0b600d1a196e6a631809f2c20dc0c7273d62" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.610064 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-n92lq" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.891316 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-z5cnm"] Mar 20 09:05:33 crc kubenswrapper[5136]: E0320 09:05:33.892036 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca05797c-feba-4bf1-969a-cf8268c5416e" containerName="container-00" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.892057 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca05797c-feba-4bf1-969a-cf8268c5416e" containerName="container-00" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.892264 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca05797c-feba-4bf1-969a-cf8268c5416e" containerName="container-00" Mar 20 09:05:33 crc kubenswrapper[5136]: I0320 09:05:33.892844 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.004561 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhjh4\" (UniqueName: \"kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.004629 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.106017 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhjh4\" (UniqueName: 
\"kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.106062 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.106140 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.124475 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhjh4\" (UniqueName: \"kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4\") pod \"crc-debug-z5cnm\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.206667 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.408418 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca05797c-feba-4bf1-969a-cf8268c5416e" path="/var/lib/kubelet/pods/ca05797c-feba-4bf1-969a-cf8268c5416e/volumes" Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.619002 5136 generic.go:334] "Generic (PLEG): container finished" podID="7d1cda92-e6b4-4955-9a82-884d297123e2" containerID="17b0f5de8a279485b6aaca68e2060dc79cee0d217678711cb4707b777c79489a" exitCode=1 Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.619061 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" event={"ID":"7d1cda92-e6b4-4955-9a82-884d297123e2","Type":"ContainerDied","Data":"17b0f5de8a279485b6aaca68e2060dc79cee0d217678711cb4707b777c79489a"} Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.619104 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" event={"ID":"7d1cda92-e6b4-4955-9a82-884d297123e2","Type":"ContainerStarted","Data":"ca22aabf1901c5227b218433ec0a9c1ba4336b9293d038311e7defcd4b3d56b4"} Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.669175 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-z5cnm"] Mar 20 09:05:34 crc kubenswrapper[5136]: I0320 09:05:34.677330 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ppsnr/crc-debug-z5cnm"] Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.696548 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.829485 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host\") pod \"7d1cda92-e6b4-4955-9a82-884d297123e2\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.829580 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhjh4\" (UniqueName: \"kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4\") pod \"7d1cda92-e6b4-4955-9a82-884d297123e2\" (UID: \"7d1cda92-e6b4-4955-9a82-884d297123e2\") " Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.829579 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host" (OuterVolumeSpecName: "host") pod "7d1cda92-e6b4-4955-9a82-884d297123e2" (UID: "7d1cda92-e6b4-4955-9a82-884d297123e2"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.829899 5136 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7d1cda92-e6b4-4955-9a82-884d297123e2-host\") on node \"crc\" DevicePath \"\"" Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.839040 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4" (OuterVolumeSpecName: "kube-api-access-lhjh4") pod "7d1cda92-e6b4-4955-9a82-884d297123e2" (UID: "7d1cda92-e6b4-4955-9a82-884d297123e2"). InnerVolumeSpecName "kube-api-access-lhjh4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:05:35 crc kubenswrapper[5136]: I0320 09:05:35.931400 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhjh4\" (UniqueName: \"kubernetes.io/projected/7d1cda92-e6b4-4955-9a82-884d297123e2-kube-api-access-lhjh4\") on node \"crc\" DevicePath \"\"" Mar 20 09:05:36 crc kubenswrapper[5136]: I0320 09:05:36.405780 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d1cda92-e6b4-4955-9a82-884d297123e2" path="/var/lib/kubelet/pods/7d1cda92-e6b4-4955-9a82-884d297123e2/volumes" Mar 20 09:05:36 crc kubenswrapper[5136]: I0320 09:05:36.633501 5136 scope.go:117] "RemoveContainer" containerID="17b0f5de8a279485b6aaca68e2060dc79cee0d217678711cb4707b777c79489a" Mar 20 09:05:36 crc kubenswrapper[5136]: I0320 09:05:36.633552 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-ppsnr/crc-debug-z5cnm" Mar 20 09:05:43 crc kubenswrapper[5136]: I0320 09:05:43.756736 5136 scope.go:117] "RemoveContainer" containerID="bbe438fbefca46d6264b55b57938c854859588f624d630a720b3f84f596f758f" Mar 20 09:05:43 crc kubenswrapper[5136]: I0320 09:05:43.777019 5136 scope.go:117] "RemoveContainer" containerID="456904b2c9b2b9b900c280b5851fc80a26454d886df3722f5e23e7c54d551d62" Mar 20 09:05:43 crc kubenswrapper[5136]: I0320 09:05:43.818455 5136 scope.go:117] "RemoveContainer" containerID="3a4f7e90e6acf592c84321f18597eeea1a3c43546eaea8fecf69996c5f79ba99" Mar 20 09:05:58 crc kubenswrapper[5136]: I0320 09:05:58.489325 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/openstack-network-exporter/0.log" Mar 20 09:05:58 crc kubenswrapper[5136]: I0320 09:05:58.673431 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/ovsdbserver-nb/0.log" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.136333 
5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566626-v5t2c"] Mar 20 09:06:00 crc kubenswrapper[5136]: E0320 09:06:00.137097 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d1cda92-e6b4-4955-9a82-884d297123e2" containerName="container-00" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.137116 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d1cda92-e6b4-4955-9a82-884d297123e2" containerName="container-00" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.137277 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d1cda92-e6b4-4955-9a82-884d297123e2" containerName="container-00" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.137786 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.140614 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.141027 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.141431 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.150292 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566626-v5t2c"] Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.159642 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7ng6\" (UniqueName: \"kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6\") pod \"auto-csr-approver-29566626-v5t2c\" (UID: \"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0\") " 
pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.261200 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7ng6\" (UniqueName: \"kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6\") pod \"auto-csr-approver-29566626-v5t2c\" (UID: \"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0\") " pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.279357 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7ng6\" (UniqueName: \"kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6\") pod \"auto-csr-approver-29566626-v5t2c\" (UID: \"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0\") " pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.459595 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:00 crc kubenswrapper[5136]: I0320 09:06:00.896007 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566626-v5t2c"] Mar 20 09:06:01 crc kubenswrapper[5136]: I0320 09:06:01.831441 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" event={"ID":"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0","Type":"ContainerStarted","Data":"66cb93f635706732da486bcfe852832e612a1ff6f0de83d8d85a923f2934f09e"} Mar 20 09:06:02 crc kubenswrapper[5136]: I0320 09:06:02.840390 5136 generic.go:334] "Generic (PLEG): container finished" podID="c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" containerID="61bbb832cea68f4a2623689f58cca96024659f79b778725dbd4701abef2ee9eb" exitCode=0 Mar 20 09:06:02 crc kubenswrapper[5136]: I0320 09:06:02.840445 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-infra/auto-csr-approver-29566626-v5t2c" event={"ID":"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0","Type":"ContainerDied","Data":"61bbb832cea68f4a2623689f58cca96024659f79b778725dbd4701abef2ee9eb"} Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.109353 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.218693 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7ng6\" (UniqueName: \"kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6\") pod \"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0\" (UID: \"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0\") " Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.223132 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6" (OuterVolumeSpecName: "kube-api-access-h7ng6") pod "c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" (UID: "c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0"). InnerVolumeSpecName "kube-api-access-h7ng6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.320268 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7ng6\" (UniqueName: \"kubernetes.io/projected/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0-kube-api-access-h7ng6\") on node \"crc\" DevicePath \"\"" Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.856703 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" event={"ID":"c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0","Type":"ContainerDied","Data":"66cb93f635706732da486bcfe852832e612a1ff6f0de83d8d85a923f2934f09e"} Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.857287 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66cb93f635706732da486bcfe852832e612a1ff6f0de83d8d85a923f2934f09e" Mar 20 09:06:04 crc kubenswrapper[5136]: I0320 09:06:04.856764 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566626-v5t2c" Mar 20 09:06:05 crc kubenswrapper[5136]: I0320 09:06:05.179401 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566620-sh7c8"] Mar 20 09:06:05 crc kubenswrapper[5136]: I0320 09:06:05.186346 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566620-sh7c8"] Mar 20 09:06:06 crc kubenswrapper[5136]: I0320 09:06:06.407729 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91" path="/var/lib/kubelet/pods/fdfe0de3-e7a9-4d27-a0b4-a777a69c8c91/volumes" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.109008 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/util/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.338337 
5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/util/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.348594 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/pull/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.348661 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/pull/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.529261 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/util/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.577271 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/pull/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.614255 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7c80869988bfa7821a7e3d4d9e7801b12993e99d05df1815488a38514cqgtfz_6bd7eb90-9eb9-40c7-adba-e6315ef6aaa7/extract/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.760751 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-59bc569d95-5lz5s_86f2c200-3fc8-4ff8-abbd-4e9196951c84/manager/0.log" Mar 20 09:06:13 crc kubenswrapper[5136]: I0320 09:06:13.949626 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-588d4d986b-nzs5m_0454e048-0e5f-454d-a341-627512f745b9/manager/0.log" Mar 20 09:06:14 crc kubenswrapper[5136]: I0320 09:06:14.208804 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-79df6bcc97-4zc57_d9bea0a5-4e0c-4eec-8c57-465238459ec5/manager/0.log" Mar 20 09:06:14 crc kubenswrapper[5136]: I0320 09:06:14.467908 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-67dd5f86f5-j7rd5_98ee6d09-7d19-49ff-af63-3f24c4bbf6de/manager/0.log" Mar 20 09:06:14 crc kubenswrapper[5136]: I0320 09:06:14.670309 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-8464cc45fb-jqkmw_ce8f650c-1729-4d5d-ae70-6cefed6ebe33/manager/0.log" Mar 20 09:06:14 crc kubenswrapper[5136]: I0320 09:06:14.932574 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6f787dddc9-cvwqk_8035ac49-bf5e-4c7a-801a-2e0a9acdbec8/manager/0.log" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.351935 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-768b96df4c-9vwxq_86ae10c6-6dff-4cac-a399-e03bd4de7134/manager/0.log" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.522321 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-7b9c774f96-rpqlj_fad403b0-ff16-4bfe-a0e3-8f0da431260b/manager/0.log" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.577580 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-55f864c847-wz6kw_0688d3df-a125-4d57-9699-a87d92b140fa/manager/0.log" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.821474 5136 patch_prober.go:28] 
interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.821526 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.855893 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-767865f676-8g592_2f2fc86c-b42c-4fd9-94e6-817ed073035d/manager/0.log" Mar 20 09:06:15 crc kubenswrapper[5136]: I0320 09:06:15.900063 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67ccfc9778-w497x_84fa1009-af1d-42a7-ae4d-fe4fd7cbb6e6/manager/0.log" Mar 20 09:06:16 crc kubenswrapper[5136]: I0320 09:06:16.297169 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5b9f45d989-sshvb_e85f51ac-f1e1-4299-91a6-9b27dcc50967/manager/0.log" Mar 20 09:06:16 crc kubenswrapper[5136]: I0320 09:06:16.362619 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-5d488d59fb-rdkrz_9b7da04b-f73c-4838-978d-34e4665f3963/manager/0.log" Mar 20 09:06:16 crc kubenswrapper[5136]: I0320 09:06:16.663186 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d58dc466-g62fh_95dfc6ea-897c-4133-ab1e-cefc81ab0623/manager/0.log" Mar 20 09:06:16 crc kubenswrapper[5136]: I0320 09:06:16.676001 5136 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-74c4796899556xf_10cd2a26-beca-4a3b-a791-83cc8cc451ab/manager/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.063627 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-b85c4d696-xv6qc_eb51f1ec-5289-4291-8334-0149c355adac/operator/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.172987 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-w8k22_4c933e5d-73ac-4820-a31c-e1d5cc5bcae0/registry-server/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.445421 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-884679f54-pdmtp_67cd41a3-e91f-4d51-b79a-61d697bbf646/manager/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.519380 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5784578c99-58pk7_527edb93-1d3a-45f7-a7c9-f9e28fb6f713/manager/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.729886 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-vlngd_3dcb58f9-ad42-41ad-af27-2ca462257e77/operator/0.log" Mar 20 09:06:17 crc kubenswrapper[5136]: I0320 09:06:17.805845 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-c674c5965-jmsnc_8129ebe9-8537-403e-9c32-835f54b5d878/manager/0.log" Mar 20 09:06:18 crc kubenswrapper[5136]: I0320 09:06:18.299745 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5c5cb9c4d7-v4npm_547cee69-3d64-49aa-8e95-c19be2bb3089/manager/0.log" Mar 20 09:06:18 crc kubenswrapper[5136]: I0320 09:06:18.487057 5136 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-d6b694c5-qwtfr_489b4c0d-9288-4e00-84ac-23fb05767840/manager/0.log" Mar 20 09:06:18 crc kubenswrapper[5136]: I0320 09:06:18.644356 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6c4d75f7f9-xp6jw_f50bceb5-4fe7-4eba-a9a2-e40f6c89583a/manager/0.log" Mar 20 09:06:18 crc kubenswrapper[5136]: I0320 09:06:18.703075 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-86bd8996f6-5rlp5_9d927c7f-6a4c-4b9b-be0e-12f6a5183cd8/manager/0.log" Mar 20 09:06:38 crc kubenswrapper[5136]: I0320 09:06:38.312704 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-j6ffq_dd410106-c7b7-4706-9b99-38e3597ee713/control-plane-machine-set-operator/0.log" Mar 20 09:06:38 crc kubenswrapper[5136]: I0320 09:06:38.492613 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vbjpm_a3ca072d-707e-4c94-9b3a-81eabc72f840/kube-rbac-proxy/0.log" Mar 20 09:06:38 crc kubenswrapper[5136]: I0320 09:06:38.541489 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-vbjpm_a3ca072d-707e-4c94-9b3a-81eabc72f840/machine-api-operator/0.log" Mar 20 09:06:43 crc kubenswrapper[5136]: I0320 09:06:43.918087 5136 scope.go:117] "RemoveContainer" containerID="d46abf622d038618ca2e56c8ba50c8df50e7f199364c722c3c72d53324ea811a" Mar 20 09:06:45 crc kubenswrapper[5136]: I0320 09:06:45.821845 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Mar 20 09:06:45 crc kubenswrapper[5136]: I0320 09:06:45.822193 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 09:06:50 crc kubenswrapper[5136]: I0320 09:06:50.648411 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-d4w65_b06e6b2d-fcba-4ba1-9ba1-82585032b382/cert-manager-controller/0.log" Mar 20 09:06:50 crc kubenswrapper[5136]: I0320 09:06:50.843421 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-4757p_f1c160ca-0866-46ab-859c-8557dc65e962/cert-manager-cainjector/0.log" Mar 20 09:06:50 crc kubenswrapper[5136]: I0320 09:06:50.939126 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-4l568_6168deec-ad68-4f6d-9736-422a6c7ade08/cert-manager-webhook/0.log" Mar 20 09:07:02 crc kubenswrapper[5136]: I0320 09:07:02.786110 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-rsxkf_3bdd0e88-cfa4-410a-b619-7918a813120d/nmstate-console-plugin/0.log" Mar 20 09:07:02 crc kubenswrapper[5136]: I0320 09:07:02.941028 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-7bqsc_43a9811e-7a36-4f11-9f02-ac3e4c00c42d/nmstate-handler/0.log" Mar 20 09:07:02 crc kubenswrapper[5136]: I0320 09:07:02.996960 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-dxl94_21fd222d-3101-4c49-bbca-611916a57ae8/nmstate-metrics/0.log" Mar 20 09:07:03 crc kubenswrapper[5136]: I0320 09:07:03.000204 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-dxl94_21fd222d-3101-4c49-bbca-611916a57ae8/kube-rbac-proxy/0.log" Mar 20 09:07:03 crc kubenswrapper[5136]: I0320 09:07:03.133537 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-mzffz_94018849-bf2a-47b4-be05-5e9ff0e0dfbd/nmstate-operator/0.log" Mar 20 09:07:03 crc kubenswrapper[5136]: I0320 09:07:03.213253 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-k7799_a6f3f958-ebef-4d11-be1e-1cd2d431006c/nmstate-webhook/0.log" Mar 20 09:07:15 crc kubenswrapper[5136]: I0320 09:07:15.821960 5136 patch_prober.go:28] interesting pod/machine-config-daemon-jst28 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 20 09:07:15 crc kubenswrapper[5136]: I0320 09:07:15.822479 5136 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 20 09:07:15 crc kubenswrapper[5136]: I0320 09:07:15.822524 5136 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-jst28" Mar 20 09:07:15 crc kubenswrapper[5136]: I0320 09:07:15.823173 5136 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51"} pod="openshift-machine-config-operator/machine-config-daemon-jst28" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 
20 09:07:15 crc kubenswrapper[5136]: I0320 09:07:15.823228 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerName="machine-config-daemon" containerID="cri-o://85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" gracePeriod=600 Mar 20 09:07:15 crc kubenswrapper[5136]: E0320 09:07:15.944308 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.234690 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-pw7kx_b1998fd9-5100-4819-83d9-61c453df2121/prometheus-operator/0.log" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.362760 5136 generic.go:334] "Generic (PLEG): container finished" podID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" exitCode=0 Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.362801 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-jst28" event={"ID":"f64ebce8-37f2-4631-9b8b-d34ebc9b93ba","Type":"ContainerDied","Data":"85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51"} Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.362847 5136 scope.go:117] "RemoveContainer" containerID="052911170bf346d7ceda8571bf74edeeb05f27214bc5f82c24d971afe343a42b" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.363362 5136 scope.go:117] 
"RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:07:16 crc kubenswrapper[5136]: E0320 09:07:16.363649 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.441200 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k_6e3b7b66-720f-451e-b76c-d14672876450/prometheus-operator-admission-webhook/0.log" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.441830 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg_e648d436-8985-4d18-83b2-8401e5e3b301/prometheus-operator-admission-webhook/0.log" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.612954 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-7pqgh_cbf95789-daee-44bb-9d6a-a5b503c0b1e1/operator/0.log" Mar 20 09:07:16 crc kubenswrapper[5136]: I0320 09:07:16.638499 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-7979496b84-bg2n6_0e3c2d08-6905-419d-a0d6-f4935119b632/perses-operator/0.log" Mar 20 09:07:28 crc kubenswrapper[5136]: I0320 09:07:28.420579 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:07:28 crc kubenswrapper[5136]: E0320 09:07:28.421292 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.188748 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-dzzhq_4c981a48-1ae6-4c06-90ed-4333de6a14d2/kube-rbac-proxy/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.407125 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-frr-files/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.606685 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-reloader/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.635655 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-metrics/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.666180 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-frr-files/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.788471 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-dzzhq_4c981a48-1ae6-4c06-90ed-4333de6a14d2/controller/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.812203 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-reloader/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.995929 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-frr-files/0.log" Mar 20 09:07:29 crc kubenswrapper[5136]: I0320 09:07:29.998719 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-metrics/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.009263 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-reloader/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.019995 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-metrics/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.171244 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-frr-files/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.200915 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/controller/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.209297 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-metrics/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.210128 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/cp-reloader/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.372137 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/kube-rbac-proxy-frr/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.415321 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/kube-rbac-proxy/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.449535 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/frr-metrics/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.592547 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/reloader/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.711708 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-b8fzm_037785f1-4827-4473-8997-20cdc8fec776/frr-k8s-webhook-server/0.log" Mar 20 09:07:30 crc kubenswrapper[5136]: I0320 09:07:30.926201 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-76dc698dd8-wkrqn_8738cb21-39f9-4eeb-90fc-f512d95642f3/manager/0.log" Mar 20 09:07:31 crc kubenswrapper[5136]: I0320 09:07:31.057760 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-787f65f959-lkczj_f9ad7722-3864-444d-92a1-235de7707fe4/webhook-server/0.log" Mar 20 09:07:31 crc kubenswrapper[5136]: I0320 09:07:31.124102 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nrftr_d54436ca-ad6f-41c2-ae88-703f150229fc/kube-rbac-proxy/0.log" Mar 20 09:07:32 crc kubenswrapper[5136]: I0320 09:07:32.036917 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-nrftr_d54436ca-ad6f-41c2-ae88-703f150229fc/speaker/0.log" Mar 20 09:07:33 crc kubenswrapper[5136]: I0320 09:07:33.713492 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-bjq5z_11c03832-f8fc-4790-98f6-43290c528ce9/frr/0.log" Mar 20 09:07:43 crc kubenswrapper[5136]: I0320 09:07:43.397512 5136 scope.go:117] 
"RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:07:43 crc kubenswrapper[5136]: E0320 09:07:43.398417 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:07:43 crc kubenswrapper[5136]: I0320 09:07:43.983743 5136 scope.go:117] "RemoveContainer" containerID="e9acc33cb6ef33f971afc8e98aee5abda02ae0be42cbb3e0b4beb36ffafb1e4d" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.039445 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/util/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.227484 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/pull/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.255464 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/util/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.301892 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/pull/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.463494 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/util/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.493113 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/pull/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.505477 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874lbvdj_3cef4dfa-acd1-43f2-adaa-3af5f28046f9/extract/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.629589 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/util/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.798958 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/pull/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.826485 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/pull/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.841412 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/util/0.log" Mar 20 09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.990491 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/util/0.log" Mar 20 
09:07:44 crc kubenswrapper[5136]: I0320 09:07:44.995167 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/extract/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.008440 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1dzgwn_900e35e2-638e-47f2-8943-1642ed3ccc59/pull/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.154753 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/util/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.318939 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/util/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.323906 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/pull/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.365563 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/pull/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.517185 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/extract/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.530333 5136 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/pull/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.559881 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5znfpl_bd4f9716-cbae-44b8-ba7a-44aaa92dae66/util/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.678573 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/util/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.902642 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/util/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.904241 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/pull/0.log" Mar 20 09:07:45 crc kubenswrapper[5136]: I0320 09:07:45.920474 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/pull/0.log" Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.060010 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/util/0.log" Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.084385 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/pull/0.log" Mar 20 
09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.104901 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_93d662022be5376a0ed3676a120a68427f47e4653a19a985adf9239726vblpn_380bd027-6e4d-49b8-af6b-db5cd8b06635/extract/0.log"
Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.231106 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-utilities/0.log"
Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.406669 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-content/0.log"
Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.448297 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-content/0.log"
Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.469740 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-utilities/0.log"
Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.785467 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-utilities/0.log"
Mar 20 09:07:46 crc kubenswrapper[5136]: I0320 09:07:46.812099 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/extract-content/0.log"
Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.057858 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-utilities/0.log"
Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.174131 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-content/0.log"
Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.233488 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-utilities/0.log"
Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.313793 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-content/0.log"
Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.449596 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-utilities/0.log"
Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.482670 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/extract-content/0.log"
Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.740774 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-sl2lb_37de93ad-331e-41ee-8f74-523100e01b09/marketplace-operator/0.log"
Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.752437 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-lbsbr_221d005e-2b68-4835-9bcc-69b3d391e37f/registry-server/0.log"
Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.881266 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-598hk_6b1bb4bc-89fb-4965-892b-8db898976bc0/registry-server/0.log"
Mar 20 09:07:47 crc kubenswrapper[5136]: I0320 09:07:47.899955 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-utilities/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.048352 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-utilities/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.060635 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-content/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.074948 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-content/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.265283 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-content/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.274856 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/extract-utilities/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.340605 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-utilities/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.523163 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-content/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.587869 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-utilities/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.588028 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-2mt29_75cf71d1-5e27-4089-bf58-1f389690d498/registry-server/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.596019 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-content/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.716356 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-utilities/0.log"
Mar 20 09:07:48 crc kubenswrapper[5136]: I0320 09:07:48.720577 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/extract-content/0.log"
Mar 20 09:07:49 crc kubenswrapper[5136]: I0320 09:07:49.667453 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zsmxp_2d0ba076-45a3-4e99-80de-774db592dfc5/registry-server/0.log"
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.294490 5136 kuberuntime_container.go:700] "PreStop hook not completed in grace period" pod="openstack/ovsdbserver-nb-2" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="ovsdbserver-nb" containerID="cri-o://f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5" gracePeriod=300
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.294886 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-2" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="ovsdbserver-nb" containerID="cri-o://f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5" gracePeriod=2
Mar 20 09:07:51 crc kubenswrapper[5136]: E0320 09:07:51.306557 5136 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=<
Mar 20 09:07:51 crc kubenswrapper[5136]: command '/usr/local/bin/container-scripts/cleanup.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/cleanup.sh
Mar 20 09:07:51 crc kubenswrapper[5136]: + source /usr/local/bin/container-scripts/functions
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_TYPE=nb
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_FILE=/etc/ovn/ovnnb_db.db
Mar 20 09:07:51 crc kubenswrapper[5136]: + DB_NAME=OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ nb == \s\b ]]
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ hostname
Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ ovsdbserver-nb-2 != \o\v\s\d\b\s\e\r\v\e\r\-\n\b\-\0 ]]
Mar 20 09:07:51 crc kubenswrapper[5136]: + ovs-appctl -t /tmp/ovnnb_db.ctl cluster/leave OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc
crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl 
-t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 
crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc 
kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 
crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: 
++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 
09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 
09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status 
OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + 
STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft 
cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft
cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc 
kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t 
/tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 
crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z 
leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 
Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep 
Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc 
kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc 
kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 
crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl 
cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: > execCommand=["/usr/local/bin/container-scripts/cleanup.sh"] containerName="ovsdbserver-nb" pod="openstack/ovsdbserver-nb-2" message=< Mar 20 09:07:51 crc kubenswrapper[5136]: ++ dirname /usr/local/bin/container-scripts/cleanup.sh Mar 20 09:07:51 crc kubenswrapper[5136]: + source
/usr/local/bin/container-scripts/functions Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_TYPE=nb Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_FILE=/etc/ovn/ovnnb_db.db Mar 20 09:07:51 crc kubenswrapper[5136]: + DB_NAME=OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ nb == \s\b ]] Mar 20 09:07:51 crc kubenswrapper[5136]: ++ hostname Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ ovsdbserver-nb-2 != \o\v\s\d\b\s\e\r\v\e\r\-\n\b\-\0 ]] Mar 20 09:07:51 crc kubenswrapper[5136]: + ovs-appctl -t /tmp/ovnnb_db.ctl cluster/leave OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 
crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]:
++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 
09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 
09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status 
OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + 
STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft 
cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc 
kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t 
/tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 
crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z 
leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t 
/tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 
crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z 
leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 
Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print 
$2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc 
kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc 
kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 
crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: 
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: >
Mar 20 09:07:51 crc kubenswrapper[5136]: E0320 09:07:51.306958 5136 kuberuntime_container.go:691] "PreStop hook failed" err=<
Mar 20 09:07:51 crc kubenswrapper[5136]: command '/usr/local/bin/container-scripts/cleanup.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/cleanup.sh
Mar 20 09:07:51 crc kubenswrapper[5136]: + source /usr/local/bin/container-scripts/functions
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_TYPE=nb
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ DB_FILE=/etc/ovn/ovnnb_db.db
Mar 20 09:07:51 crc kubenswrapper[5136]: + DB_NAME=OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ nb == \s\b ]]
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ hostname
Mar 20 09:07:51 crc kubenswrapper[5136]: + [[ ovsdbserver-nb-2 != \o\v\s\d\b\s\e\r\v\e\r\-\n\b\-\0 ]]
Mar 20 09:07:51 crc kubenswrapper[5136]: + ovs-appctl -t /tmp/ovnnb_db.ctl cluster/leave OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print
$2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc 
kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc 
kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 
crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving 
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 
20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true 
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status 
OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + 
STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft 
cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc 
kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc 
kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t 
/tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 
crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z 
leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 
Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ 
ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print 
$2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc 
kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc 
kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 
crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 
20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc 
kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']' Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1 Mar 20 09:07:51 crc kubenswrapper[5136]: + true Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status: Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}' Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving 
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: + true
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ ovs-appctl -t /tmp/ovnnb_db.ctl cluster/status OVN_Northbound
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ grep Status:
Mar 20 09:07:51 crc kubenswrapper[5136]: ++ awk -e '{print $2}'
Mar 20 09:07:51 crc kubenswrapper[5136]: + STATUS=leaving
Mar 20 09:07:51 crc kubenswrapper[5136]: + '[' -z leaving -o xleaving = 'xleft cluster' ']'
Mar 20 09:07:51 crc kubenswrapper[5136]: + sleep 1
Mar 20 09:07:51 crc kubenswrapper[5136]: > pod="openstack/ovsdbserver-nb-2" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="ovsdbserver-nb" containerID="cri-o://f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5"
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.611851 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/ovsdbserver-nb/0.log"
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.611907 5136 generic.go:334] "Generic (PLEG): container finished" podID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerID="f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5" exitCode=143
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.611941 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2"
event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerDied","Data":"f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5"}
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.766575 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/ovsdbserver-nb/0.log"
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.766932 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.936848 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") "
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.936931 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") "
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.936974 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") "
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937587 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937713 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") "
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937767 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") "
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937809 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") "
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937864 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts" (OuterVolumeSpecName: "scripts") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937897 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") "
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.937933 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hznvw\" (UniqueName: \"kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw\") pod \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\" (UID: \"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4\") "
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.938106 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config" (OuterVolumeSpecName: "config") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.938317 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdb-rundir\") on node \"crc\" DevicePath \"\""
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.938362 5136 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-scripts\") on node \"crc\" DevicePath \"\""
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.938376 5136 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-config\") on node \"crc\" DevicePath \"\""
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.942712 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw" (OuterVolumeSpecName: "kube-api-access-hznvw") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "kube-api-access-hznvw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.948124 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7". PluginName "kubernetes.io/csi", VolumeGidValue ""
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.956664 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.985180 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:07:51 crc kubenswrapper[5136]: I0320 09:07:51.994781 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" (UID: "c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.039738 5136 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.039856 5136 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") on node \"crc\" "
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.039876 5136 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.039891 5136 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\""
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.039902 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hznvw\" (UniqueName: \"kubernetes.io/projected/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4-kube-api-access-hznvw\") on node \"crc\" DevicePath \"\""
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.072651 5136 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.072847 5136 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7") on node "crc"
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.141125 5136 reconciler_common.go:293] "Volume detached for volume \"pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-690d9ba6-faee-4e76-b872-57aa5a2845d7\") on node \"crc\" DevicePath \"\""
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.622183 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-2_c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/ovsdbserver-nb/0.log"
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.622245 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-2" event={"ID":"c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4","Type":"ContainerDied","Data":"3ed0d6ce099ff1f367dca760d285f74c76c709803b8a996aaf1abb5243af3234"}
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.622290 5136 scope.go:117] "RemoveContainer" containerID="aa7d63dd9f5d69196ca03f337b7c8a99ee8f9a0db5fd272afae4861760ebba16"
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.622370 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-2"
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.650533 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.656642 5136 scope.go:117] "RemoveContainer" containerID="f34562d040e4070527490f6678fe0df2ac9672c9008bef502d57527a880a76d5"
Mar 20 09:07:52 crc kubenswrapper[5136]: I0320 09:07:52.658667 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-2"]
Mar 20 09:07:54 crc kubenswrapper[5136]: I0320 09:07:54.396594 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51"
Mar 20 09:07:54 crc kubenswrapper[5136]: E0320 09:07:54.397175 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba"
Mar 20 09:07:54 crc kubenswrapper[5136]: I0320 09:07:54.408567 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" path="/var/lib/kubelet/pods/c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4/volumes"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.136439 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566628-l9k2s"]
Mar 20 09:08:00 crc kubenswrapper[5136]: E0320 09:08:00.137579 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="ovsdbserver-nb"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137595 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="ovsdbserver-nb"
Mar 20 09:08:00 crc kubenswrapper[5136]: E0320 09:08:00.137621 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="openstack-network-exporter"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137631 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="openstack-network-exporter"
Mar 20 09:08:00 crc kubenswrapper[5136]: E0320 09:08:00.137646 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" containerName="oc"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137654 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" containerName="oc"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137829 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="ovsdbserver-nb"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137857 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" containerName="oc"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.137879 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2e6ed27-57ea-4ea9-9d66-e1088b5a07d4" containerName="openstack-network-exporter"
Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.138434 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566628-l9k2s" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.141064 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.141114 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.141303 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.148390 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566628-l9k2s"] Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.250019 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrk2s\" (UniqueName: \"kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s\") pod \"auto-csr-approver-29566628-l9k2s\" (UID: \"795d143d-8524-469f-bf25-830fe5e73bce\") " pod="openshift-infra/auto-csr-approver-29566628-l9k2s" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.351293 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrk2s\" (UniqueName: \"kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s\") pod \"auto-csr-approver-29566628-l9k2s\" (UID: \"795d143d-8524-469f-bf25-830fe5e73bce\") " pod="openshift-infra/auto-csr-approver-29566628-l9k2s" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.375117 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrk2s\" (UniqueName: \"kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s\") pod \"auto-csr-approver-29566628-l9k2s\" (UID: \"795d143d-8524-469f-bf25-830fe5e73bce\") " 
pod="openshift-infra/auto-csr-approver-29566628-l9k2s" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.455913 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566628-l9k2s" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.774397 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c5db4f9fc-lf96k_6e3b7b66-720f-451e-b76c-d14672876450/prometheus-operator-admission-webhook/0.log" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.804153 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-8ff7d675-pw7kx_b1998fd9-5100-4819-83d9-61c453df2121/prometheus-operator/0.log" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.807002 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c5db4f9fc-tqkzg_e648d436-8985-4d18-83b2-8401e5e3b301/prometheus-operator-admission-webhook/0.log" Mar 20 09:08:00 crc kubenswrapper[5136]: I0320 09:08:00.894047 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566628-l9k2s"] Mar 20 09:08:01 crc kubenswrapper[5136]: I0320 09:08:01.008513 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-6dd7dd855f-7pqgh_cbf95789-daee-44bb-9d6a-a5b503c0b1e1/operator/0.log" Mar 20 09:08:01 crc kubenswrapper[5136]: I0320 09:08:01.058473 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-7979496b84-bg2n6_0e3c2d08-6905-419d-a0d6-f4935119b632/perses-operator/0.log" Mar 20 09:08:01 crc kubenswrapper[5136]: I0320 09:08:01.681422 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566628-l9k2s" 
event={"ID":"795d143d-8524-469f-bf25-830fe5e73bce","Type":"ContainerStarted","Data":"e818a2d0d5694696160c1421e3fb3394393ab01522e8e65fc73d2d3a566735fb"} Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.151608 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"] Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.153803 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.176565 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"] Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.297967 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.298082 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.298115 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn2sw\" (UniqueName: \"kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.399785 5136 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.399887 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn2sw\" (UniqueName: \"kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.399979 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.400527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.400587 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.425510 5136 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xn2sw\" (UniqueName: \"kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw\") pod \"redhat-marketplace-j2szc\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.479334 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.702669 5136 generic.go:334] "Generic (PLEG): container finished" podID="795d143d-8524-469f-bf25-830fe5e73bce" containerID="ee5576ee1e2eb8b6c2ee48065627daac8c0e821b5702496a43cfbcccf410f4a1" exitCode=0 Mar 20 09:08:03 crc kubenswrapper[5136]: I0320 09:08:03.702759 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566628-l9k2s" event={"ID":"795d143d-8524-469f-bf25-830fe5e73bce","Type":"ContainerDied","Data":"ee5576ee1e2eb8b6c2ee48065627daac8c0e821b5702496a43cfbcccf410f4a1"} Mar 20 09:08:04 crc kubenswrapper[5136]: I0320 09:08:04.014800 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"] Mar 20 09:08:04 crc kubenswrapper[5136]: W0320 09:08:04.019275 5136 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbfd87b8_0fff_41eb_a772_a9481ded678f.slice/crio-0dc5ae11dd0443b183d8eb55a27c7c31a97362c70a8bed1b33bd896a54bff997 WatchSource:0}: Error finding container 0dc5ae11dd0443b183d8eb55a27c7c31a97362c70a8bed1b33bd896a54bff997: Status 404 returned error can't find the container with id 0dc5ae11dd0443b183d8eb55a27c7c31a97362c70a8bed1b33bd896a54bff997 Mar 20 09:08:04 crc kubenswrapper[5136]: I0320 09:08:04.715024 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbfd87b8-0fff-41eb-a772-a9481ded678f" 
containerID="a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8" exitCode=0 Mar 20 09:08:04 crc kubenswrapper[5136]: I0320 09:08:04.715279 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerDied","Data":"a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8"} Mar 20 09:08:04 crc kubenswrapper[5136]: I0320 09:08:04.715698 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerStarted","Data":"0dc5ae11dd0443b183d8eb55a27c7c31a97362c70a8bed1b33bd896a54bff997"} Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.033638 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566628-l9k2s" Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.224229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrk2s\" (UniqueName: \"kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s\") pod \"795d143d-8524-469f-bf25-830fe5e73bce\" (UID: \"795d143d-8524-469f-bf25-830fe5e73bce\") " Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.230887 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s" (OuterVolumeSpecName: "kube-api-access-zrk2s") pod "795d143d-8524-469f-bf25-830fe5e73bce" (UID: "795d143d-8524-469f-bf25-830fe5e73bce"). InnerVolumeSpecName "kube-api-access-zrk2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.326920 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrk2s\" (UniqueName: \"kubernetes.io/projected/795d143d-8524-469f-bf25-830fe5e73bce-kube-api-access-zrk2s\") on node \"crc\" DevicePath \"\"" Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.397758 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:08:05 crc kubenswrapper[5136]: E0320 09:08:05.398288 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.733830 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerStarted","Data":"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d"} Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.735421 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566628-l9k2s" event={"ID":"795d143d-8524-469f-bf25-830fe5e73bce","Type":"ContainerDied","Data":"e818a2d0d5694696160c1421e3fb3394393ab01522e8e65fc73d2d3a566735fb"} Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.735471 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e818a2d0d5694696160c1421e3fb3394393ab01522e8e65fc73d2d3a566735fb" Mar 20 09:08:05 crc kubenswrapper[5136]: I0320 09:08:05.735500 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566628-l9k2s" Mar 20 09:08:06 crc kubenswrapper[5136]: I0320 09:08:06.128659 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566622-tbs2b"] Mar 20 09:08:06 crc kubenswrapper[5136]: I0320 09:08:06.137522 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566622-tbs2b"] Mar 20 09:08:06 crc kubenswrapper[5136]: I0320 09:08:06.406373 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eeb9dd63-3112-441b-961e-b61a752527d8" path="/var/lib/kubelet/pods/eeb9dd63-3112-441b-961e-b61a752527d8/volumes" Mar 20 09:08:06 crc kubenswrapper[5136]: I0320 09:08:06.746612 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerID="08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d" exitCode=0 Mar 20 09:08:06 crc kubenswrapper[5136]: I0320 09:08:06.746691 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerDied","Data":"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d"} Mar 20 09:08:07 crc kubenswrapper[5136]: I0320 09:08:07.757209 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerStarted","Data":"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46"} Mar 20 09:08:07 crc kubenswrapper[5136]: I0320 09:08:07.786933 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j2szc" podStartSLOduration=2.231745033 podStartE2EDuration="4.786916864s" podCreationTimestamp="2026-03-20 09:08:03 +0000 UTC" firstStartedPulling="2026-03-20 09:08:04.718309177 +0000 UTC m=+8316.977620328" lastFinishedPulling="2026-03-20 
09:08:07.273480998 +0000 UTC m=+8319.532792159" observedRunningTime="2026-03-20 09:08:07.782533688 +0000 UTC m=+8320.041844869" watchObservedRunningTime="2026-03-20 09:08:07.786916864 +0000 UTC m=+8320.046228005" Mar 20 09:08:13 crc kubenswrapper[5136]: I0320 09:08:13.479698 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:13 crc kubenswrapper[5136]: I0320 09:08:13.482305 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:13 crc kubenswrapper[5136]: I0320 09:08:13.525390 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:13 crc kubenswrapper[5136]: I0320 09:08:13.842806 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:13 crc kubenswrapper[5136]: I0320 09:08:13.884491 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"] Mar 20 09:08:15 crc kubenswrapper[5136]: I0320 09:08:15.817993 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j2szc" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="registry-server" containerID="cri-o://8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46" gracePeriod=2 Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.186481 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"] Mar 20 09:08:16 crc kubenswrapper[5136]: E0320 09:08:16.187174 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="795d143d-8524-469f-bf25-830fe5e73bce" containerName="oc" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.187202 5136 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="795d143d-8524-469f-bf25-830fe5e73bce" containerName="oc" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.187472 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="795d143d-8524-469f-bf25-830fe5e73bce" containerName="oc" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.189101 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.203062 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"] Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.234374 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.358879 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content\") pod \"fbfd87b8-0fff-41eb-a772-a9481ded678f\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.359006 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn2sw\" (UniqueName: \"kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw\") pod \"fbfd87b8-0fff-41eb-a772-a9481ded678f\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.359056 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities\") pod \"fbfd87b8-0fff-41eb-a772-a9481ded678f\" (UID: \"fbfd87b8-0fff-41eb-a772-a9481ded678f\") " Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.359842 5136 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities" (OuterVolumeSpecName: "utilities") pod "fbfd87b8-0fff-41eb-a772-a9481ded678f" (UID: "fbfd87b8-0fff-41eb-a772-a9481ded678f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.360073 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.360161 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.360187 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t29w4\" (UniqueName: \"kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.360355 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.365318 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw" (OuterVolumeSpecName: "kube-api-access-xn2sw") pod "fbfd87b8-0fff-41eb-a772-a9481ded678f" (UID: "fbfd87b8-0fff-41eb-a772-a9481ded678f"). InnerVolumeSpecName "kube-api-access-xn2sw". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.388263 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fbfd87b8-0fff-41eb-a772-a9481ded678f" (UID: "fbfd87b8-0fff-41eb-a772-a9481ded678f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.462372 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.462532 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.462581 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t29w4\" (UniqueName: \"kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 
09:08:16.462730 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fbfd87b8-0fff-41eb-a772-a9481ded678f-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.462754 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn2sw\" (UniqueName: \"kubernetes.io/projected/fbfd87b8-0fff-41eb-a772-a9481ded678f-kube-api-access-xn2sw\") on node \"crc\" DevicePath \"\"" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.463527 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.463547 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.482053 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t29w4\" (UniqueName: \"kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4\") pod \"redhat-operators-dxsd5\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.562908 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.834882 5136 generic.go:334] "Generic (PLEG): container finished" podID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerID="8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46" exitCode=0 Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.834934 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerDied","Data":"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46"} Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.834964 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j2szc" event={"ID":"fbfd87b8-0fff-41eb-a772-a9481ded678f","Type":"ContainerDied","Data":"0dc5ae11dd0443b183d8eb55a27c7c31a97362c70a8bed1b33bd896a54bff997"} Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.834991 5136 scope.go:117] "RemoveContainer" containerID="8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.836102 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j2szc" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.869076 5136 scope.go:117] "RemoveContainer" containerID="08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.870104 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"] Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.878459 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j2szc"] Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.885606 5136 scope.go:117] "RemoveContainer" containerID="a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.902144 5136 scope.go:117] "RemoveContainer" containerID="8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46" Mar 20 09:08:16 crc kubenswrapper[5136]: E0320 09:08:16.902973 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46\": container with ID starting with 8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46 not found: ID does not exist" containerID="8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.903173 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46"} err="failed to get container status \"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46\": rpc error: code = NotFound desc = could not find container \"8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46\": container with ID starting with 8dad4e723d9c86256ffbc98c7c59661c6f6ec163a86052c6de6e24cf75b6df46 not found: 
ID does not exist" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.903284 5136 scope.go:117] "RemoveContainer" containerID="08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d" Mar 20 09:08:16 crc kubenswrapper[5136]: E0320 09:08:16.903741 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d\": container with ID starting with 08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d not found: ID does not exist" containerID="08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.903789 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d"} err="failed to get container status \"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d\": rpc error: code = NotFound desc = could not find container \"08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d\": container with ID starting with 08dff1d4628188ef66aa86f3deca9118eb1c8e3543076d13d91119a527a9b70d not found: ID does not exist" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.903833 5136 scope.go:117] "RemoveContainer" containerID="a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8" Mar 20 09:08:16 crc kubenswrapper[5136]: E0320 09:08:16.904211 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8\": container with ID starting with a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8 not found: ID does not exist" containerID="a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8" Mar 20 09:08:16 crc kubenswrapper[5136]: I0320 09:08:16.904248 5136 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8"} err="failed to get container status \"a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8\": rpc error: code = NotFound desc = could not find container \"a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8\": container with ID starting with a7680028346410bcefda3f9ea7325d064152605e874ea5979e8c3bb57ca8eec8 not found: ID does not exist" Mar 20 09:08:17 crc kubenswrapper[5136]: I0320 09:08:17.006611 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"] Mar 20 09:08:17 crc kubenswrapper[5136]: I0320 09:08:17.397326 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:08:17 crc kubenswrapper[5136]: E0320 09:08:17.397871 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:08:17 crc kubenswrapper[5136]: I0320 09:08:17.841852 5136 generic.go:334] "Generic (PLEG): container finished" podID="21198dc8-ca82-4022-a042-15a080d02f43" containerID="0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce" exitCode=0 Mar 20 09:08:17 crc kubenswrapper[5136]: I0320 09:08:17.841914 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerDied","Data":"0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce"} Mar 20 09:08:17 crc kubenswrapper[5136]: I0320 09:08:17.841939 5136 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerStarted","Data":"404b76685edde1b05718f90ae969f60b35ee2dd05dd9ea25b9aa6b994a8e3f2c"} Mar 20 09:08:18 crc kubenswrapper[5136]: I0320 09:08:18.408573 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" path="/var/lib/kubelet/pods/fbfd87b8-0fff-41eb-a772-a9481ded678f/volumes" Mar 20 09:08:18 crc kubenswrapper[5136]: I0320 09:08:18.853133 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerStarted","Data":"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805"} Mar 20 09:08:22 crc kubenswrapper[5136]: I0320 09:08:22.885250 5136 generic.go:334] "Generic (PLEG): container finished" podID="21198dc8-ca82-4022-a042-15a080d02f43" containerID="8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805" exitCode=0 Mar 20 09:08:22 crc kubenswrapper[5136]: I0320 09:08:22.885578 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerDied","Data":"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805"} Mar 20 09:08:23 crc kubenswrapper[5136]: I0320 09:08:23.894542 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerStarted","Data":"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe"} Mar 20 09:08:23 crc kubenswrapper[5136]: I0320 09:08:23.928988 5136 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dxsd5" podStartSLOduration=2.416432287 podStartE2EDuration="7.928961302s" podCreationTimestamp="2026-03-20 09:08:16 
+0000 UTC" firstStartedPulling="2026-03-20 09:08:17.843139482 +0000 UTC m=+8330.102450633" lastFinishedPulling="2026-03-20 09:08:23.355668497 +0000 UTC m=+8335.614979648" observedRunningTime="2026-03-20 09:08:23.911653855 +0000 UTC m=+8336.170965036" watchObservedRunningTime="2026-03-20 09:08:23.928961302 +0000 UTC m=+8336.188272483" Mar 20 09:08:26 crc kubenswrapper[5136]: I0320 09:08:26.563309 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:26 crc kubenswrapper[5136]: I0320 09:08:26.563648 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:27 crc kubenswrapper[5136]: I0320 09:08:27.612490 5136 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dxsd5" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="registry-server" probeResult="failure" output=< Mar 20 09:08:27 crc kubenswrapper[5136]: timeout: failed to connect service ":50051" within 1s Mar 20 09:08:27 crc kubenswrapper[5136]: > Mar 20 09:08:29 crc kubenswrapper[5136]: I0320 09:08:29.397443 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:08:29 crc kubenswrapper[5136]: E0320 09:08:29.398010 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:08:36 crc kubenswrapper[5136]: I0320 09:08:36.610858 5136 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 
09:08:36 crc kubenswrapper[5136]: I0320 09:08:36.662000 5136 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:36 crc kubenswrapper[5136]: I0320 09:08:36.852485 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"] Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.024718 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dxsd5" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="registry-server" containerID="cri-o://9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe" gracePeriod=2 Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.406579 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.479890 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities\") pod \"21198dc8-ca82-4022-a042-15a080d02f43\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.479981 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content\") pod \"21198dc8-ca82-4022-a042-15a080d02f43\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.480079 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t29w4\" (UniqueName: \"kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4\") pod \"21198dc8-ca82-4022-a042-15a080d02f43\" (UID: \"21198dc8-ca82-4022-a042-15a080d02f43\") " Mar 20 09:08:38 
crc kubenswrapper[5136]: I0320 09:08:38.480908 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities" (OuterVolumeSpecName: "utilities") pod "21198dc8-ca82-4022-a042-15a080d02f43" (UID: "21198dc8-ca82-4022-a042-15a080d02f43"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.499824 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4" (OuterVolumeSpecName: "kube-api-access-t29w4") pod "21198dc8-ca82-4022-a042-15a080d02f43" (UID: "21198dc8-ca82-4022-a042-15a080d02f43"). InnerVolumeSpecName "kube-api-access-t29w4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.581278 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t29w4\" (UniqueName: \"kubernetes.io/projected/21198dc8-ca82-4022-a042-15a080d02f43-kube-api-access-t29w4\") on node \"crc\" DevicePath \"\"" Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.581320 5136 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-utilities\") on node \"crc\" DevicePath \"\"" Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.610690 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21198dc8-ca82-4022-a042-15a080d02f43" (UID: "21198dc8-ca82-4022-a042-15a080d02f43"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:08:38 crc kubenswrapper[5136]: I0320 09:08:38.683069 5136 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21198dc8-ca82-4022-a042-15a080d02f43-catalog-content\") on node \"crc\" DevicePath \"\"" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.039662 5136 generic.go:334] "Generic (PLEG): container finished" podID="21198dc8-ca82-4022-a042-15a080d02f43" containerID="9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe" exitCode=0 Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.039747 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerDied","Data":"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe"} Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.039793 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dxsd5" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.039843 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxsd5" event={"ID":"21198dc8-ca82-4022-a042-15a080d02f43","Type":"ContainerDied","Data":"404b76685edde1b05718f90ae969f60b35ee2dd05dd9ea25b9aa6b994a8e3f2c"} Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.039891 5136 scope.go:117] "RemoveContainer" containerID="9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.064954 5136 scope.go:117] "RemoveContainer" containerID="8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.094405 5136 scope.go:117] "RemoveContainer" containerID="0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.102464 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"] Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.110411 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dxsd5"] Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.138182 5136 scope.go:117] "RemoveContainer" containerID="9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe" Mar 20 09:08:39 crc kubenswrapper[5136]: E0320 09:08:39.139112 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe\": container with ID starting with 9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe not found: ID does not exist" containerID="9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.139201 5136 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe"} err="failed to get container status \"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe\": rpc error: code = NotFound desc = could not find container \"9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe\": container with ID starting with 9a20fe5703be0703fe37136fddaf3b252c87d7855ab13dd71377a48813fb83fe not found: ID does not exist" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.139239 5136 scope.go:117] "RemoveContainer" containerID="8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805" Mar 20 09:08:39 crc kubenswrapper[5136]: E0320 09:08:39.139622 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805\": container with ID starting with 8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805 not found: ID does not exist" containerID="8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.139667 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805"} err="failed to get container status \"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805\": rpc error: code = NotFound desc = could not find container \"8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805\": container with ID starting with 8411e946201b01caca96a2ea3b91ba7a0c90acb1ff8bd128c91466c83bf62805 not found: ID does not exist" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.139695 5136 scope.go:117] "RemoveContainer" containerID="0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce" Mar 20 09:08:39 crc kubenswrapper[5136]: E0320 
09:08:39.140703 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce\": container with ID starting with 0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce not found: ID does not exist" containerID="0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce" Mar 20 09:08:39 crc kubenswrapper[5136]: I0320 09:08:39.140745 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce"} err="failed to get container status \"0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce\": rpc error: code = NotFound desc = could not find container \"0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce\": container with ID starting with 0641853ff5b42a9782f7027c13abdd6a6c55215614ec6ef3320fc869bccf9fce not found: ID does not exist" Mar 20 09:08:40 crc kubenswrapper[5136]: I0320 09:08:40.401343 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:08:40 crc kubenswrapper[5136]: E0320 09:08:40.401512 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:08:40 crc kubenswrapper[5136]: I0320 09:08:40.407799 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21198dc8-ca82-4022-a042-15a080d02f43" path="/var/lib/kubelet/pods/21198dc8-ca82-4022-a042-15a080d02f43/volumes" Mar 20 09:08:44 crc kubenswrapper[5136]: I0320 09:08:44.063971 
5136 scope.go:117] "RemoveContainer" containerID="d110e85766974db9b00f23e4ec0b43a5d95e3bc9caa9f95ded6497351baab885" Mar 20 09:08:52 crc kubenswrapper[5136]: I0320 09:08:52.402765 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:08:52 crc kubenswrapper[5136]: E0320 09:08:52.403646 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:09:06 crc kubenswrapper[5136]: I0320 09:09:06.245895 5136 generic.go:334] "Generic (PLEG): container finished" podID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerID="f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32" exitCode=0 Mar 20 09:09:06 crc kubenswrapper[5136]: I0320 09:09:06.245965 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" event={"ID":"ca8086a5-288f-4e6b-80ae-07842239f3a9","Type":"ContainerDied","Data":"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32"} Mar 20 09:09:06 crc kubenswrapper[5136]: I0320 09:09:06.247744 5136 scope.go:117] "RemoveContainer" containerID="f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32" Mar 20 09:09:07 crc kubenswrapper[5136]: I0320 09:09:07.109097 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ppsnr_must-gather-8lzbv_ca8086a5-288f-4e6b-80ae-07842239f3a9/gather/0.log" Mar 20 09:09:07 crc kubenswrapper[5136]: I0320 09:09:07.397051 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:09:07 crc kubenswrapper[5136]: E0320 
09:09:07.397303 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.206230 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-ppsnr/must-gather-8lzbv"] Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.207039 5136 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="copy" containerID="cri-o://908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307" gracePeriod=2 Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.214049 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-ppsnr/must-gather-8lzbv"] Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.679682 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ppsnr_must-gather-8lzbv_ca8086a5-288f-4e6b-80ae-07842239f3a9/copy/0.log" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.680523 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.766016 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output\") pod \"ca8086a5-288f-4e6b-80ae-07842239f3a9\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.766144 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qnw8\" (UniqueName: \"kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8\") pod \"ca8086a5-288f-4e6b-80ae-07842239f3a9\" (UID: \"ca8086a5-288f-4e6b-80ae-07842239f3a9\") " Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.774002 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8" (OuterVolumeSpecName: "kube-api-access-7qnw8") pod "ca8086a5-288f-4e6b-80ae-07842239f3a9" (UID: "ca8086a5-288f-4e6b-80ae-07842239f3a9"). InnerVolumeSpecName "kube-api-access-7qnw8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.867798 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qnw8\" (UniqueName: \"kubernetes.io/projected/ca8086a5-288f-4e6b-80ae-07842239f3a9-kube-api-access-7qnw8\") on node \"crc\" DevicePath \"\"" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.889184 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ca8086a5-288f-4e6b-80ae-07842239f3a9" (UID: "ca8086a5-288f-4e6b-80ae-07842239f3a9"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 20 09:09:14 crc kubenswrapper[5136]: I0320 09:09:14.969599 5136 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ca8086a5-288f-4e6b-80ae-07842239f3a9-must-gather-output\") on node \"crc\" DevicePath \"\"" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.335762 5136 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-ppsnr_must-gather-8lzbv_ca8086a5-288f-4e6b-80ae-07842239f3a9/copy/0.log" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.337663 5136 generic.go:334] "Generic (PLEG): container finished" podID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerID="908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307" exitCode=143 Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.337714 5136 scope.go:117] "RemoveContainer" containerID="908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.337860 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-ppsnr/must-gather-8lzbv" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.359758 5136 scope.go:117] "RemoveContainer" containerID="f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.422431 5136 scope.go:117] "RemoveContainer" containerID="908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307" Mar 20 09:09:15 crc kubenswrapper[5136]: E0320 09:09:15.422981 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307\": container with ID starting with 908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307 not found: ID does not exist" containerID="908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.423010 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307"} err="failed to get container status \"908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307\": rpc error: code = NotFound desc = could not find container \"908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307\": container with ID starting with 908ad886f5a62adfaa68fa4a7daa63fc90813986271a90584d8bbe3e41a99307 not found: ID does not exist" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.423032 5136 scope.go:117] "RemoveContainer" containerID="f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32" Mar 20 09:09:15 crc kubenswrapper[5136]: E0320 09:09:15.424335 5136 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32\": container with ID starting with 
f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32 not found: ID does not exist" containerID="f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32" Mar 20 09:09:15 crc kubenswrapper[5136]: I0320 09:09:15.424409 5136 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32"} err="failed to get container status \"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32\": rpc error: code = NotFound desc = could not find container \"f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32\": container with ID starting with f062bb8f36e86ce4d24e41f10a6b67a6be5f0f5fca3b905ceabe12f658332a32 not found: ID does not exist" Mar 20 09:09:16 crc kubenswrapper[5136]: I0320 09:09:16.405784 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" path="/var/lib/kubelet/pods/ca8086a5-288f-4e6b-80ae-07842239f3a9/volumes" Mar 20 09:09:18 crc kubenswrapper[5136]: I0320 09:09:18.402493 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:09:18 crc kubenswrapper[5136]: E0320 09:09:18.402931 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:09:32 crc kubenswrapper[5136]: I0320 09:09:32.397209 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:09:32 crc kubenswrapper[5136]: E0320 09:09:32.397954 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:09:43 crc kubenswrapper[5136]: I0320 09:09:43.396494 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:09:43 crc kubenswrapper[5136]: E0320 09:09:43.397216 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:09:55 crc kubenswrapper[5136]: I0320 09:09:55.396535 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:09:55 crc kubenswrapper[5136]: E0320 09:09:55.397280 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.139547 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566630-qkw29"] Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140249 5136 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="extract-utilities" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140265 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="extract-utilities" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140279 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="extract-content" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140286 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="extract-content" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140303 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="gather" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140312 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="gather" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140327 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="extract-content" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140366 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="extract-content" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140392 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140400 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140412 5136 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="copy" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140420 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="copy" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140434 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140442 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: E0320 09:10:00.140452 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="extract-utilities" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140460 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="extract-utilities" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140629 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfd87b8-0fff-41eb-a772-a9481ded678f" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140642 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="copy" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140661 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="21198dc8-ca82-4022-a042-15a080d02f43" containerName="registry-server" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.140676 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca8086a5-288f-4e6b-80ae-07842239f3a9" containerName="gather" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.141249 5136 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.144109 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.146731 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566630-qkw29"] Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.148044 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.149105 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.262336 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prnjv\" (UniqueName: \"kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv\") pod \"auto-csr-approver-29566630-qkw29\" (UID: \"e61a3995-2869-48ee-b013-6698bf7a7ec3\") " pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.364060 5136 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prnjv\" (UniqueName: \"kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv\") pod \"auto-csr-approver-29566630-qkw29\" (UID: \"e61a3995-2869-48ee-b013-6698bf7a7ec3\") " pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.382249 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prnjv\" (UniqueName: \"kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv\") pod \"auto-csr-approver-29566630-qkw29\" (UID: \"e61a3995-2869-48ee-b013-6698bf7a7ec3\") " 
pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.471397 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:00 crc kubenswrapper[5136]: I0320 09:10:00.904780 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566630-qkw29"] Mar 20 09:10:01 crc kubenswrapper[5136]: I0320 09:10:01.736028 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566630-qkw29" event={"ID":"e61a3995-2869-48ee-b013-6698bf7a7ec3","Type":"ContainerStarted","Data":"16ac9a129f619a136d15dfc6b650e6952c26e592dfdef07e7627b286e34e745f"} Mar 20 09:10:02 crc kubenswrapper[5136]: I0320 09:10:02.744752 5136 generic.go:334] "Generic (PLEG): container finished" podID="e61a3995-2869-48ee-b013-6698bf7a7ec3" containerID="f1027819ae6f51f7f443cda60bb19c5e9832f7b10ca947cb4577ab36caa8c50e" exitCode=0 Mar 20 09:10:02 crc kubenswrapper[5136]: I0320 09:10:02.745143 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566630-qkw29" event={"ID":"e61a3995-2869-48ee-b013-6698bf7a7ec3","Type":"ContainerDied","Data":"f1027819ae6f51f7f443cda60bb19c5e9832f7b10ca947cb4577ab36caa8c50e"} Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.038437 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.118748 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prnjv\" (UniqueName: \"kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv\") pod \"e61a3995-2869-48ee-b013-6698bf7a7ec3\" (UID: \"e61a3995-2869-48ee-b013-6698bf7a7ec3\") " Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.125318 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv" (OuterVolumeSpecName: "kube-api-access-prnjv") pod "e61a3995-2869-48ee-b013-6698bf7a7ec3" (UID: "e61a3995-2869-48ee-b013-6698bf7a7ec3"). InnerVolumeSpecName "kube-api-access-prnjv". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.220512 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prnjv\" (UniqueName: \"kubernetes.io/projected/e61a3995-2869-48ee-b013-6698bf7a7ec3-kube-api-access-prnjv\") on node \"crc\" DevicePath \"\"" Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.761072 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566630-qkw29" event={"ID":"e61a3995-2869-48ee-b013-6698bf7a7ec3","Type":"ContainerDied","Data":"16ac9a129f619a136d15dfc6b650e6952c26e592dfdef07e7627b286e34e745f"} Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.761106 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ac9a129f619a136d15dfc6b650e6952c26e592dfdef07e7627b286e34e745f" Mar 20 09:10:04 crc kubenswrapper[5136]: I0320 09:10:04.761106 5136 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29566630-qkw29" Mar 20 09:10:05 crc kubenswrapper[5136]: I0320 09:10:05.113068 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566624-n9gpj"] Mar 20 09:10:05 crc kubenswrapper[5136]: I0320 09:10:05.118893 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566624-n9gpj"] Mar 20 09:10:06 crc kubenswrapper[5136]: I0320 09:10:06.410098 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c69bbe6-8752-4d39-b2e4-2eab9134dbda" path="/var/lib/kubelet/pods/2c69bbe6-8752-4d39-b2e4-2eab9134dbda/volumes" Mar 20 09:10:08 crc kubenswrapper[5136]: I0320 09:10:08.400907 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:10:08 crc kubenswrapper[5136]: E0320 09:10:08.401190 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:10:19 crc kubenswrapper[5136]: I0320 09:10:19.396694 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:10:19 crc kubenswrapper[5136]: E0320 09:10:19.397422 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" 
podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:10:32 crc kubenswrapper[5136]: I0320 09:10:32.397522 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:10:32 crc kubenswrapper[5136]: E0320 09:10:32.398601 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:10:43 crc kubenswrapper[5136]: I0320 09:10:43.396745 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:10:43 crc kubenswrapper[5136]: E0320 09:10:43.397547 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:10:44 crc kubenswrapper[5136]: I0320 09:10:44.227547 5136 scope.go:117] "RemoveContainer" containerID="35e33276bd939043cf0f403b9a2e455c0ebe9937a874e7a190c199a2c2c31266" Mar 20 09:10:57 crc kubenswrapper[5136]: I0320 09:10:57.397261 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:10:57 crc kubenswrapper[5136]: E0320 09:10:57.398189 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:11:09 crc kubenswrapper[5136]: I0320 09:11:09.396526 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:11:09 crc kubenswrapper[5136]: E0320 09:11:09.397072 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:11:21 crc kubenswrapper[5136]: I0320 09:11:21.396802 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:11:21 crc kubenswrapper[5136]: E0320 09:11:21.397673 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:11:32 crc kubenswrapper[5136]: I0320 09:11:32.396984 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:11:32 crc kubenswrapper[5136]: E0320 09:11:32.397989 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:11:44 crc kubenswrapper[5136]: I0320 09:11:44.286728 5136 scope.go:117] "RemoveContainer" containerID="cc5de32d3b2f3f78d2de7c2c04c373c815b2dc521f0ac4c71c7880cc929ccffe" Mar 20 09:11:44 crc kubenswrapper[5136]: I0320 09:11:44.397470 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:11:44 crc kubenswrapper[5136]: E0320 09:11:44.397707 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:11:57 crc kubenswrapper[5136]: I0320 09:11:57.397026 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:11:57 crc kubenswrapper[5136]: E0320 09:11:57.397714 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.139446 5136 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29566632-8vsxg"] Mar 20 09:12:00 crc 
kubenswrapper[5136]: E0320 09:12:00.140010 5136 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e61a3995-2869-48ee-b013-6698bf7a7ec3" containerName="oc" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.140025 5136 state_mem.go:107] "Deleted CPUSet assignment" podUID="e61a3995-2869-48ee-b013-6698bf7a7ec3" containerName="oc" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.140182 5136 memory_manager.go:354] "RemoveStaleState removing state" podUID="e61a3995-2869-48ee-b013-6698bf7a7ec3" containerName="oc" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.140676 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.143712 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.143862 5136 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.147415 5136 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-wg745" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.159097 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566632-8vsxg"] Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.243006 5136 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5dwq\" (UniqueName: \"kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq\") pod \"auto-csr-approver-29566632-8vsxg\" (UID: \"626f6cf8-8639-4b3a-a616-d9e67bcfed6a\") " pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.344426 5136 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-g5dwq\" (UniqueName: \"kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq\") pod \"auto-csr-approver-29566632-8vsxg\" (UID: \"626f6cf8-8639-4b3a-a616-d9e67bcfed6a\") " pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.366619 5136 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5dwq\" (UniqueName: \"kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq\") pod \"auto-csr-approver-29566632-8vsxg\" (UID: \"626f6cf8-8639-4b3a-a616-d9e67bcfed6a\") " pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.458926 5136 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.849312 5136 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29566632-8vsxg"] Mar 20 09:12:00 crc kubenswrapper[5136]: I0320 09:12:00.858050 5136 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 20 09:12:01 crc kubenswrapper[5136]: I0320 09:12:01.635398 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" event={"ID":"626f6cf8-8639-4b3a-a616-d9e67bcfed6a","Type":"ContainerStarted","Data":"49448043a17356eed0644eeff019252729dc712397ebbb1aa5df5ad7accb2043"} Mar 20 09:12:02 crc kubenswrapper[5136]: I0320 09:12:02.642826 5136 generic.go:334] "Generic (PLEG): container finished" podID="626f6cf8-8639-4b3a-a616-d9e67bcfed6a" containerID="04787b0e292ba80df64d898abc8997ab8fd28e7f944e47ef2ca1271229038838" exitCode=0 Mar 20 09:12:02 crc kubenswrapper[5136]: I0320 09:12:02.642868 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" 
event={"ID":"626f6cf8-8639-4b3a-a616-d9e67bcfed6a","Type":"ContainerDied","Data":"04787b0e292ba80df64d898abc8997ab8fd28e7f944e47ef2ca1271229038838"} Mar 20 09:12:03 crc kubenswrapper[5136]: I0320 09:12:03.916126 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:03 crc kubenswrapper[5136]: I0320 09:12:03.998229 5136 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5dwq\" (UniqueName: \"kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq\") pod \"626f6cf8-8639-4b3a-a616-d9e67bcfed6a\" (UID: \"626f6cf8-8639-4b3a-a616-d9e67bcfed6a\") " Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.003898 5136 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq" (OuterVolumeSpecName: "kube-api-access-g5dwq") pod "626f6cf8-8639-4b3a-a616-d9e67bcfed6a" (UID: "626f6cf8-8639-4b3a-a616-d9e67bcfed6a"). InnerVolumeSpecName "kube-api-access-g5dwq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.100032 5136 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5dwq\" (UniqueName: \"kubernetes.io/projected/626f6cf8-8639-4b3a-a616-d9e67bcfed6a-kube-api-access-g5dwq\") on node \"crc\" DevicePath \"\"" Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.668300 5136 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" event={"ID":"626f6cf8-8639-4b3a-a616-d9e67bcfed6a","Type":"ContainerDied","Data":"49448043a17356eed0644eeff019252729dc712397ebbb1aa5df5ad7accb2043"} Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.668669 5136 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49448043a17356eed0644eeff019252729dc712397ebbb1aa5df5ad7accb2043" Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.668328 5136 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29566632-8vsxg" Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.978539 5136 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29566626-v5t2c"] Mar 20 09:12:04 crc kubenswrapper[5136]: I0320 09:12:04.984268 5136 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29566626-v5t2c"] Mar 20 09:12:06 crc kubenswrapper[5136]: I0320 09:12:06.407326 5136 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0" path="/var/lib/kubelet/pods/c2990f35-bac2-4ca6-94e0-3a9d34bdd1d0/volumes" Mar 20 09:12:11 crc kubenswrapper[5136]: I0320 09:12:11.396645 5136 scope.go:117] "RemoveContainer" containerID="85a8be9045bc73fae3807cb7e0e2c254aed2c14f2129c76e77c69f4d3fef7c51" Mar 20 09:12:11 crc kubenswrapper[5136]: E0320 09:12:11.397485 5136 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-jst28_openshift-machine-config-operator(f64ebce8-37f2-4631-9b8b-d34ebc9b93ba)\"" pod="openshift-machine-config-operator/machine-config-daemon-jst28" podUID="f64ebce8-37f2-4631-9b8b-d34ebc9b93ba" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515157207574024462 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015157207575017400 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015157166072016517 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015157166072015467 5ustar corecore